Friday, September 25, 2009

We've released our first Layar: Peaks

We've finally released our first layer (see here): "Peaks", a layer that gives names to the mountains you are looking at through the Layar application on Android. Enjoy nature while knowing more about it!

Finally, a programmable camera

Researchers at Stanford are currently building an open-source camera, Camera 2.0, where the firmware can be completely designed by the user. Instead of being stuck with features and effects coming from manufacturers, end-user development can create much richer services. Examples given are image recognition and hints, shown before the shutter is released, about how often the current setting appears in popular online galleries, indicating how worthwhile the shot might be.

I could also imagine adding meta-information, or even implementing a photo guide that takes photographers to the most scenic spots in a city and teaches them how to really take nice shots. It would be great to have access to that platform soon.
Finally, I really like the lovely shot of Zurich they use in the video above ;).

Ambient features beat GPS indoors


SurroundSense [1] implements the simple idea of integrating various ambient features of an indoor location, such as sound, light, and the user's acceleration pattern. The authors present an architecture which they successfully test to distinguish between 51 different business locations. Before deploying this, one should check whether the assumption that the ambience of places remains stationary really holds. It also remains unclear to me how to actually train such a system...
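
To make the fingerprinting idea concrete, here is a minimal sketch (my own illustration, not the authors' implementation): ambient readings are folded into a small feature vector and a new sample is matched to the closest known place. The feature choice and all names are assumptions.

```python
# Hypothetical sketch of ambience fingerprinting (not SurroundSense's actual code).
import math

def fingerprint(sound_levels, light_levels, accel_magnitudes):
    """Summarize raw ambient readings into a small feature vector."""
    def mean(xs):
        return sum(xs) / len(xs)
    def std(xs):
        m = mean(xs)
        return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))
    return [mean(sound_levels), std(sound_levels),
            mean(light_levels), std(light_levels),
            mean(accel_magnitudes), std(accel_magnitudes)]

def classify(sample, labelled_fingerprints):
    """Return the place whose stored fingerprint is closest (Euclidean distance)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(labelled_fingerprints,
               key=lambda place: dist(sample, labelled_fingerprints[place]))

# "Training" would simply mean visiting each shop once and storing its fingerprint:
# labelled = {"cafe_a": fingerprint(...), "bookstore_b": fingerprint(...)}
# classify(fingerprint(new_sound, new_light, new_accel), labelled)
```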

[1] SurroundSense: Mobile Phone Localization Via Ambience Fingerprinting
Martin Azizyan, Ionut Constandache, Romit Roy Choudhury
ACM MobiCom, September 2009.

Friday, September 18, 2009

Nice talks at MobileHCI09

Again I had the chance to visit MobileHCI09. Despite the rather odd organization of the program into rather fragmented dual tracks, I enjoyed some nice talks. Overall, my impression was that the time slots for full papers in particular were rather long, which the presenters pretty often filled with sometimes tiring overviews of related work or other explanations of commonplaces.

In his talk about Glance Phone [1], Richard Harper embedded a quite provocative message, asking why MobileHCI research focuses so much on human-device interaction instead of the more interesting human-human communication supported by technology. He underlined that the very same message, if conveyed through shouting, whispering, bellowing, or murmuring (the normal mechanics of communication), gets a completely different meaning, and that these mechanics cannot be expressed with today's devices. The basic idea of Glance Phone is to let callers glance through someone's phone's front camera. This should enable the caller to better guess the recipient's current situation and whether it is appropriate to call or not. The implementation runs a web service on the phone and only allows people in the address book to glance. The outcome of the study was quite unexpected. Instead of serving the initial purpose of letting callers detect appropriate moments to call their recipients, users rather used Glance Phone in unstressed, arty, funny, and amusing moments to show off their status, as done on Facebook. Quite a nice lesson in how a study can go wrong but still make it as a paper.
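
To illustrate the architecture sentence above, here is a minimal sketch of a "glance" service (my own illustration, not the authors' implementation): a tiny web service on the phone that only serves a camera snapshot to callers already present in the address book. The endpoint, header, and camera hook are all assumptions.

```python
# Hypothetical sketch of the "glance" idea (not Glance Phone's actual code).
from http.server import BaseHTTPRequestHandler, HTTPServer

ADDRESS_BOOK = {"+41791234567", "+41799876543"}   # contacts allowed to glance

def capture_front_camera():
    """Placeholder for grabbing a frame from the front camera."""
    return b"<jpeg bytes>"

class GlanceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        caller = self.headers.get("X-Caller-Id", "")
        if caller not in ADDRESS_BOOK:
            self.send_response(403)   # strangers may not glance
            self.end_headers()
            return
        self.send_response(200)
        self.send_header("Content-Type", "image/jpeg")
        self.end_headers()
        self.wfile.write(capture_front_camera())

if __name__ == "__main__":
    HTTPServer(("", 8080), GlanceHandler).serve_forever()
```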

Friendlee [2] showed how rich activity within the intimate network (friends, family, relatives) can be derived from mobile phone interaction and represented as a weighted graph. This allows the phone book to be re-arranged and other people to be found via hops through close friends. I was wondering whether interaction with others via the phone is really a good indicator of intimacy, thinking of recurring conversations when troubleshooting with public authorities or corporations. Would be nice if those pushed my friends and relatives down to the bottom of my phone book...
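
For illustration, a minimal sketch of such a weighted contact graph (my own illustration, not the paper's code): edge weights grow with communication events, the phone book is ordered by tie strength, and friends-of-friends are reachable via one hop.

```python
# Hypothetical sketch of a Friendlee-style contact graph.
from collections import defaultdict

class ContactGraph:
    def __init__(self):
        self.weights = defaultdict(float)      # (a, b) -> interaction weight
        self.neighbours = defaultdict(set)     # a -> {b, ...}

    def record_interaction(self, a, b, weight=1.0):
        """Log a call or message between a and b, strengthening the tie."""
        self.weights[(a, b)] += weight
        self.weights[(b, a)] += weight
        self.neighbours[a].add(b)
        self.neighbours[b].add(a)

    def ranked_contacts(self, me):
        """Phone book ordered by tie strength, strongest first."""
        return sorted(self.neighbours[me],
                      key=lambda c: self.weights[(me, c)], reverse=True)

    def friends_of_friends(self, me):
        """People reachable via one hop through my direct contacts."""
        direct = self.neighbours[me]
        return {fof for c in direct for fof in self.neighbours[c]} - direct - {me}

g = ContactGraph()
g.record_interaction("me", "alice", 5)     # five calls with alice
g.record_interaction("me", "bob", 1)
g.record_interaction("alice", "carol", 3)
print(g.ranked_contacts("me"))             # ['alice', 'bob']
print(g.friends_of_friends("me"))          # {'carol'}
```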

Johannes Schöning presented PhotoMap [3], a nice approach for capturing you-are-here maps and turning them into GPS-navigable maps on mobile phones. He proposed two-point referencing and overlaying onto Google Maps. A user study showed that slightly incorrectly referenced maps were good enough. This was a really nice idea, for which he also received the best-paper award. What struck me was his mention that Apple rejected their iPhone app from the App Store because it would replicate functionality of the iPhone; does that mean there will be no further location and mapping apps!?
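
To illustrate the two-point referencing step, here is a minimal sketch (my own illustration, not the PhotoMap implementation): two pixel positions on the photographed map with known geo coordinates determine a similarity transform (scale, rotation, translation) that maps any other pixel onto the map plane. It assumes the area is small enough to treat lat/lon as a flat plane, and the example coordinates are made up.

```python
# Hypothetical sketch of two-point georeferencing.
# Pixel coordinates are assumed with the y axis pointing up (flip beforehand if needed).

def two_point_transform(p1, g1, p2, g2):
    """p1, p2: pixel coords (x, y); g1, g2: matching geo coords (lon, lat)."""
    zp1, zp2 = complex(*p1), complex(*p2)
    zg1, zg2 = complex(*g1), complex(*g2)
    a = (zg2 - zg1) / (zp2 - zp1)          # scale and rotation
    b = zg1 - a * zp1                      # translation
    def pixel_to_geo(p):
        z = a * complex(*p) + b
        return (z.real, z.imag)            # (lon, lat)
    return pixel_to_geo

# Example: reference the two points the user tapped, then project another map pixel.
to_geo = two_point_transform((100, 200), (8.5400, 47.3780),
                             (900, 950), (8.5480, 47.3720))
print(to_geo((500, 500)))                  # geo position of another map pixel
```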

uWave [4] presented an evaluation of authentication through gestures. The authors found that once a gesture is visually disclosed to others, mimicking it is rather simple. In order to increase security, the authors proposed to additionally press a button while performing the gesture, defining its start and end in a hidden way. Well... why not just press buttons? A comparison of gestures vs. several buttons would be nice here. Novelty is cool, but usability should also be key in order to show the superior properties of the novel over the established...
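
As an illustration of gesture matching, here is a minimal sketch using dynamic time warping to compare accelerometer traces against an enrolled template. This is one plausible way to do it, not the authors' code; the distance threshold and the idea of recording samples between button press and release are my assumptions.

```python
# Hypothetical sketch of accelerometer-gesture matching via dynamic time warping (DTW).

def dtw_distance(a, b):
    """DTW distance between two sequences of (x, y, z) acceleration samples."""
    inf = float("inf")
    n, m = len(a), len(b)
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = sum((p - q) ** 2 for p, q in zip(a[i - 1], b[j - 1])) ** 0.5
            cost[i][j] = d + min(cost[i - 1][j], cost[i][j - 1], cost[i - 1][j - 1])
    return cost[n][m]

def authenticate(template, attempt, threshold=5.0):
    """Accept the attempt if it warps onto the enrolled template closely enough."""
    return dtw_distance(template, attempt) < threshold

# enrolled = record_gesture()   # samples captured between button press and release
# print(authenticate(enrolled, record_gesture()))
```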
Finally, Stephan von Watzdorf, a Ph.D. student of ours, gave his first presentation at a conference. He discussed the ability of phones to be used as risk-alert devices [5]. Based on the analysis of a survey, the results were that people see value in phones for this purpose due to their always-on and always-with-us character.

[1] Richard Harper and Stuart Taylor, Glancephone – an exploration of human expression, In Proceedings of the 11th international Conference on Human-Computer interaction with Mobile Devices and Services (Bonn, Germany, September 15 - 18, 2009). MobileHCI '09. [pdf]
[2] Ankolekar, A., Szabo, G., Luon, Y., Huberman, B. A., Wilkinson, D., and Wu, F. 2009. Friendlee: a mobile application for your social life. In Proceedings of the 11th international Conference on Human-Computer interaction with Mobile Devices and Services (Bonn, Germany, September 15 - 18, 2009). MobileHCI '09. [pdf]
[3] Johannes Schöning; Keith Cheverst; Markus Löchtefeld; Antonio Krüger; Michael Rohs; Faisal Taher: Photomap: Using Spontaneously Taken Images of Public Maps for Pedestrian Navigation Tasks on Mobile Devices. In Proceedings of the 11th international Conference on Human-Computer interaction with Mobile Devices and Services (Bonn, Germany, September 15 - 18, 2009). MobileHCI '09. [pdf]
[4] Liu, Jiayang; Zhong, Lin; Wickramasuriya, Jehan; Vasudevan, Venu: User Evaluation of Lightweight User Authentication with a Single Tri-Axis Accelerometer. In Proceedings of the 11th international Conference on Human-Computer interaction with Mobile Devices and Services (Bonn, Germany, September 15 - 18, 2009). MobileHCI '09. [pdf]
[5] von Watzdorf, Stephan; Michahelles, Florian: Evaluating Mobile Phones as Risk Information Providers. In Proceedings of the 11th international Conference on Human-Computer interaction with Mobile Devices and Services (Bonn, Germany, September 15 - 18, 2009). MobileHCI '09. [pdf]

Jun Rekimoto's Keynote at MobileHCI09

Jun Rekimoto gave examples of mixing reality with the virtual world. As a quite illustrative example he showed how the fighting characters of a card game could come alive by virtually augmenting their fighting capabilities, though this was nothing really new, as Albrecht notes even more critically. Then he introduced sensonomy as a concept that combines intentional and unintentional tagging by users, known from the web (PageRank, social tagging), for the real world in order to increase the quality of location tracking using WiFi. He showed in a simulation how location accuracy could be increased by user participation. He did not really talk about users' motivation to do that, which should be the pivotal prerequisite for such a system.

The other topics of lifelogging and capturing environmental data from crowds were not new either, but in the end his example of pet lifelogging added a nice perspective. I'm quite convinced that people might enjoy unconventional pics taken by their cat, information about which other cats she meets derived from face recognition, and Facebook-like relationships computed from that, especially if combined with need-solving applications such as pet tracking.

Augmented reality vs. tagging

Just recently I stumbled over some exciting videos showing how augmented reality on mobile phones can be used to overlay the real world with additional information when you view it through the phone:
[embedded videos: new]

On the one hand this looks fascinating, as it finally runs on mobile phones today; on the other hand, the idea itself is kind of old:
[embedded videos: old]

I can imagine that applying AR may make sense in specific situations, but how do you indicate to users that there is virtual information "behind" the current real-world view? How frustrating would it be to run through a city watching it entirely through your mobile phone without finding any augmented information?
In that sense I still find the approach of tagging places with barcodes or NFC more convincing, e.g. as ServTag or several NFC projects show.

Tuesday, September 8, 2009

Whitepaper: Mobile Advertising - 2020 Vision

Ogilvy and Acision have published a white paper called "Mobile Advertising – 2020 Vision" exploring how mobile advertising will look in 2020 (download here). They are cautious about the quality of predictions in general but envision some nice scenarios: a car advertisement (see to the right) can trigger different meanings for different users, ranging from buying to renting or tuning a car. Later they mention serendipity through ads from trusted advertisers and interaction between devices (e.g. buying the shirt an actor on TV is wearing through your mobile).
However, an essential prerequisite is true adoption of the mobile phone, not just for doing geeky things but for using it continuously for the boring routines of daily life. I still wonder how we get there, looking at the recent 'flat rate' Swisscom offered a few weeks ago: 169 CHF/month (!!!) for unlimited data and voice, NOT including roaming...

Friday, September 4, 2009

Announcement: CfP - What can the Internet of Things do for the Citizen?

The workshop "What can the Internet of Things do for the Citizen?" (CIOT), Workshop @ Pervasive 2010, Helsinki, May 17, 2010, is accepting submissions.
I have the great pleasure of announcing that our Pervasive workshop proposal "What can the Internet of Things do for the Citizen?" (CIOT) has been accepted at the Eighth International Conference on Pervasive Computing (Pervasive 2010).

We (Stephan Karpischek (ETH Zurich), Albrecht Schmidt (Univ. of Duisburg-Essen), and myself) are soliciting submissions describing applications, tackling infrastructure issues, introducing meaningful forms of interaction as well as articles discussing business scenarios that show the commercialization of Internet-of-Things applications for citizens. What if we had technology that gathered data from things of our daily lives, tracked and counted everything in order to solve citizens’ needs (e.g. reduce waste, prevent loss, and improve search)?

The reception of the call in the community was quite overwhelming, and we managed to organize a quite remarkable PC for this workshop.

Please find the detailed call here: www.autoidlabs.org/events/ciot2010

You may also express your interest on facebook.

We are looking forward to your submissions!

Interact 2009, Uppsala - Overall

As said in my first post, I found six sessions in parallel quite overwhelming. Thus I, like all other visitors, could only attend a small fraction of the talks.

I enjoyed the contribution of Florian Alt, who proposed a proxy-based implementation [1] to alter the content of websites with or without the owner's consent. This could increase the usability of badly designed sites or, probably more importantly, implement dynamic applications on top of static content, e.g. integrating info from your social network with the content of a website you're just looking at.
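
To make the proxy idea concrete, here is a minimal sketch of a rewriting HTTP proxy (my own illustration, not the platform from [1]): the browser is pointed at the proxy, which fetches the requested page and injects extra markup before returning it. The banner content and port are placeholders.

```python
# Hypothetical sketch of a content-rewriting proxy.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

BANNER = b"<div style='background:#ffa'>Injected by proxy: notes from your network</div>"

class RewritingProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        # In classic HTTP-proxy style, the full target URL arrives in the request line.
        upstream = urlopen(self.path)
        body = upstream.read()
        content_type = upstream.headers.get("Content-Type", "text/html")
        if "text/html" in content_type:
            body = body.replace(b"<body>", b"<body>" + BANNER, 1)  # inject once
        self.send_response(200)
        self.send_header("Content-Type", content_type)
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Configure the browser to use localhost:8888 as its HTTP proxy.
    HTTPServer(("", 8888), RewritingProxy).serve_forever()
```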

Next, I was fascinated by the simple idea Paul Holleis [2] presented: attaching a sequence of NFC tags behind a laptop screen in order to facilitate touch-based interaction similar to a touch screen, but at much lower cost. Obviously, one technical problem to be solved is finding screens that still allow the radio waves to permeate the screen.

Finally, I was glad to see the talk of Felix von Reischach [3], our Ph.D. student, presenting our work on the different ways of interacting with products via mobile devices: barcode vs. EPC vs. NFC.


[1] F. Alt, A. Schmidt, R. Atterer, P. Holleis
Bringing Web 2.0 to the Old Web: A Platform for Parasitic Applications. Interact 2009. Uppsala, Sweden. 24-28 August 2009.
[2] Khoovirajsingh Seewoonauth, Enrico Rukzio, Robert Hardy and Paul Holleis. NFC-based Mobile Interactions with Direct-View Displays. Interact 2009. Uppsala, Sweden. 24-28 August 2009.
[3] F. von Reischach, F. Michahelles, D. Guinard, R. Adelmann, E. Fleisch, A. Schmidt: An Evaluation of Product Identification Techniques for Mobile Phones, Full Paper at the 12th IFIP TC13 Conference in Human-Computer Interaction (Interact2009), Sweden, August 2009, [PDF] [Talk].

Interact 2009, Uppsala - KeyNote 3: Liam Bannon, "Towards human-centred design"

Finally, after a great show the evening before, the organizers of Interact 2009 had been smart enough to schedule a third keynote to motivate people to attend the third day of the conference.
Liam Bannon talked about "Towards human-centred design". He started with the well-known observation of how ubiquitous computing has changed our interaction with technology and how, as a consequence, the desktop no longer persists as the dominant form of interaction. More interesting was his comment about how industry has changed as well: he gave the example of a senior executive who changed his instructions from "evaluate" (something the company has developed) via "develop" and "explore" to "come up with something interesting". Liam thereby outlined a clear shift from industry-driven to user-driven research.
Liam jumped a little bit across topics in his talk, and he also had a vast number of slides with "too much text" on them, as he admitted quite frequently throughout the talk. Anyway, he took strong opposition against replacing humans with technology, as human skills are still relevant in technical systems, such that humans should always be the real actors. Well, who in the audience would ever have questioned that...
When Liam started to talk about ambient intelligence, he attacked the vision of all-knowing systems pretending to operate on behalf of the user, a vision also critiqued in Rob van Kranenburg's new online book The Internet of Things. He rather proposed to design systems that extend human capabilities.
Then Liam made another jump, to the topic of collecting data vs. forgetting information. Using Microsoft's MyLifeBits project, he questioned the underlying assumption that collecting data is a good thing per se. He emphasized that forgetting is also an important part of human life and should likewise be supported through technology, e.g. via digital shelters.
Then Liam jumped back to the previous topic of ambient intelligence and gave some good counter-examples to the notion of the stupid user always being supported by technology: user-generated content and open-source software show just the opposite, namely how skilled users spread their ideas and collaborate through technology.

Human agency and technologies have to come together. He referred to the McNamara fallacy:
The first step is to measure whatever can be easily measured. This is OK as far as it goes.
The second step is to disregard that which can't be easily measured or to give it an arbitrary quantitative value. This is artificial and misleading.
The third step is to presume that what can't be measured easily really isn't important. This is blindness.
The fourth step is to say that what can't be easily measured really doesn't exist. This is suicide.
I find this a quite remarkable counter-position towards high-resolution management.

Interact 2009, Uppsala - KeyNote 2: Nicklas Lundblad, "Lies, damn lies and privacy"

The second day of Interact2009 started with a keynote from Nicklas Lundblad, former European policy manager at Google and now deputy CEO of the Stockholm Chamber of Commerce.
Nicklas started his talk with the insight that security today is an established field in companies, whereas privacy is not: usually there is no designated role for privacy in companies. The most established answer to privacy is privacy-enhancing technology (PET), which is mostly cumbersome, implemented as an add-on, and not really perceived as useful: the more privacy, the less usability.
As an example, the incognito mode of web browsers allows you to browse the web without leaving traces of what you've visited: supposedly it is mostly used at public computers, but the significant majority of its use is for porn.

Accordingly, privacy always carries the implication, whether true or not, that the user wants to hide something. Nicklas played a quite exaggerated movie about the 'Google' opt-out village:

Google Opt Out Feature Lets Users Protect Privacy By Moving To Remote Village

Then Nicklas gave some background on a few quite exciting interpretations of privacy:

1. Sphere of gaze
If you always keep in mind that you are under the watchful gaze of god, you will be fine.
Nicklas gave the example of Bentham's Panopticon, a disciplinary prison in which the inmates can be watched at any time from a central point, which ties into the concept of being watched by god; more details can be found here.

2. Privacy as a mask
Nicklas called this the Swedish interpretation of privacy, a personal-integrity concept which allows you to keep up your mask in front of others. You can experience these masks when you compare pictures of your friends on LinkedIn with theirs on Facebook.

3. Privacy as a game
Privacy is about learning and playing with it. You learn to handle it and to care about it.

In former times there was no privacy at all: we all lived in villages, and privacy only came with urbanization. In the village you are naturally risk-averse, because if you fail your entire reputation is lost. With urbanization you can move somewhere else, so you become more willing to take risks, which triggers economic growth.
A growing counter-culture of lying emerges.

The best-explored sector of online lying is online dating: 20% self-report lying on such sites, and 90% even think that others lie.

Now, after a long introduction, the main topic of the talk finally appeared:
How can you build privacy based on lying?

There followed some philosophical discourse about lying, which turned out to be a product of social evolution.
Nicklas proposed to support people in lying through technology: e.g. don't reject people on Facebook, but rather introduce a mechanism that allows you to express your intent to confirm without disclosing your information; that would be lying on Facebook.
Another approach was to apply steganography and turn emails into spam, thereby hiding your information in spam, which again could be a mechanism for privacy based on lying (the spam actually is not spam!).

Obviously, this approach has some problems, first of all ethical ones, as lying could destroy the internet once social trust dissolves and structure disappears. However, Nicklas added, lying is not a monolithic concept: there are also "white lies" such as "you're looking beautiful today", and Nicklas asked "is there even a right to be able to lie?".

The talk concluded with some challenges: how to build prototypes that support lying, how people lie with technology (spam filters), and whether lying is changing society (people lie less in email than on mobile). His talk is even available:



The lessons of this talk are not meant to be implemented as they stand, but at least the talk triggered some thoughts and discussions. I find it nicely illustrates how humans are still in control and design technology towards their needs, even along with their deficiencies...

Interact 2009, Uppsala - KeyNote 1: Kristina Höök, "Mobile life - body and interaction"

Last week I could attend Interact2009. I was really thrilled by the huge variety in the program, six tracks in parallel; quite hard not to miss the most interesting talks ;)
Anyway, at least the keynotes were plenary and not to be missed:

First, Kristina Höök talked about the lasting challenge of coming up with new interfaces for new environments. She set out her talk with the confession that HCI research had lost its relevance in the past, as it had left the ground of real-world needs.

The challenge, according to her, is to design interfaces in the wild for strange environments, e.g. video DJ applications that allow streaming video from the cameras of various mobile phones in the audience to the big display [1]. At that point I was not sure whether Kristina was pointing a way out of the crisis or rather wanted to illustrate the state of the HCI crisis.
She emphasized the difficulties of designing something new which might even be hard to describe in words, e.g. body expression. She proposed explanatory design, a playful way of associating technology with habits. She talked about reptile owners who treat their animals as a living exhibition, and she took that as motivation for designing a living wallpaper that gave birth to a flower fertilized by images the users had been uploading. Kristina strongly voted for the ludic society, as the social would be what all life is about. Later in the discussion she even clearly stated that supporting playful entertainment would be more important than solving a murder.
I really enjoyed when she talked about the malleable experience and introduced Mobile 2.0 [2], an environment that allows everybody to become a designer of pervasive games. She used the notion of "digital handicraft for all".
Finally, she mentioned the importance of the ecosystem of drivers, namely mobile network operators vs. Google vs. application designers. She proposed to start with consumers' needs when designing applications. Interestingly, she didn't want to call the users 'users' but rather 'actors', as they themselves should build apps in short cycles, since they know their problems best.
That was a nice message to follow; however, in the examples she gave, I didn't really see her following that advice herself.

[1] Engström, A., Esbjörnsson, M. and Juhlin, O. (2008). Mobile Collaborative Live Video Mixing. In Proceedings of MobileHCI 2008. ACM Press, pp. 157-166.
[2] Holmquist, L. E. 2007. Mobile 2.0. interactions 14, 2 (Mar. 2007), 46-47.