Identity Abuzz: OAuth

The community concerned with Identity on the Web has had a very hectic month of April. Identity is the bedrock foundation of anything social – think 3rd-party value-add services rooted in the social graph that Twitter, Facebook, LinkedIn, etc. expose and promote access to. Among various events, I single out Facebook’s F8 event as the catalyst for several announcements and specs that came out this month.

The emerging OAuth protocol is one of the most interesting sights in the Identosphere. OAuth enables 3rd party access to web resources without propagating or sharing passwords. It has been likened to a valet key, in that resource owners can delegate access along with an envelope of authorized actions.

I have been interested in OAuth for quite some time because it holds potential:

  • to stop the password sprawl and make it less likely that passwords will be mishandled, either in users’ hands or in the back end of some poorly managed IT or Cloud (as I observed here in the case of smartphones)
  • to curb phishing vectors by way of branded sign-in pages that the user is redirected to in a seamless user experience
  • to bring devices that are data-entry impaired (like my beloved Roku box) back into the fold of dependable authentication

The OAuth chronology goes like this:

  • Dec ‘07, OAuth 1.0 debuts
  • Early ‘09, vulnerabilities are documented, chiefly a session-fixation attack
  • May 2009, the IETF OAuth Working Group is chartered
  • June ‘09, OAuth 1.0a is introduced, addressing the vulnerabilities
  • Shortly afterwards, OAuth 1.0a implementations become available, chiefly Twitter’s
  • OAuth 1.0a is demonstrated on the iPhone platform, with applications like Flickit
  • November 2009, folks from Microsoft, Google and Yahoo introduce the OAuth Web Resource Authorization Protocol (WRAP) and contribute it to the IETF. Chiefly, it standardizes the creation and propagation of tokens over SSL (in lieu of signatures). It also codifies a number of use cases and roles. By far, I found this to be the best-written spec in the whole OAuth document series
  • April 2010, OAuth 1.0 becomes RFC 5849
  • April 2010, OAuth WRAP implementations are announced
  • April 2010, the first revision of the OAuth 2.0 Internet-Draft is released; it builds upon both OAuth 1.0a and OAuth WRAP
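
To make the moving parts concrete, here is a minimal sketch of the web-server flow in the spirit of the 2.0 draft. Caveat: the endpoint, parameter names and credentials below are illustrative placeholders of my own, not any particular provider’s API; a real client would also POST the token request over SSL rather than merely build its body.

```python
from urllib.parse import urlencode, urlparse, parse_qs

# Hypothetical endpoint -- every real provider publishes its own.
AUTHORIZE_URL = "https://provider.example/oauth2/authorize"

def build_authorize_url(client_id, redirect_uri, scope, state):
    """Step 1: redirect the user's browser to the provider's branded sign-in page."""
    params = {
        "response_type": "code",   # ask for a short-lived authorization code
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": scope,            # the "envelope of authorized actions"
        "state": state,            # anti-CSRF nonce, echoed back on the callback
    }
    return AUTHORIZE_URL + "?" + urlencode(params)

def extract_code(callback_url, expected_state):
    """Step 2: after the user consents, the provider redirects back with a code."""
    qs = parse_qs(urlparse(callback_url).query)
    if qs.get("state", [None])[0] != expected_state:
        raise ValueError("state mismatch: possible CSRF")
    return qs["code"][0]

def build_token_request(code, client_id, client_secret, redirect_uri):
    """Step 3: exchange the code for an access token over SSL (no signatures)."""
    return {
        "grant_type": "authorization_code",
        "code": code,
        "client_id": client_id,
        "client_secret": client_secret,
        "redirect_uri": redirect_uri,
    }
```

Note how the password never leaves the provider’s sign-in page: the 3rd party only ever sees the code and the resulting token.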

I’m eager to see how OAuth will fare vis-à-vis these challenges:

  • What impact: Will the OAuth protocol be universally implemented to the letter of the emerging IETF standard? Or will there be dialects, each producing an island of interoperability around a specific social graph like Twitter’s, Facebook’s, LinkedIn’s, etc.?
  • Set proper expectations: OAuth will not rid us of phishing. There will still be rogue clients and exploits of the client callback URL. However, the risks will likely be contained to losing the token rather than the password (the former being lower-grade security material than the latter)
  • Withstand cross-currents: XAuth (also announced in April!) and browser-specific solutions like Mozilla’s Account Manager pitch radically different solutions to the web identity challenge

I look forward to being at the Internet Identity unConference, May 17-19, in Mountain View.

Cloud pulls crypto agendas

What a great monthly publication CACM is. In the 15 years that I’ve been a member of the ACM, this must be the time that I’m getting the most out of CACM (now in soft-copy as well for extra convenience). In recent issues, CACM has featured interesting crypto papers with a Cloud spin.

In the March issue, I dug into Craig Gentry’s paper on homomorphic encryption. In today’s Clouds, we cannot separate delegation of processing from delegation of cleartext access. Enter homomorphic crypto and, voilà, we no longer need to question a Cloud provider’s aptitude for handling sensitive information. With this crypto, one can tap off-the-shelf public compute resources to do the Navier-Stokes for a new wing or process the interception tracks from some military sightings, without ever revealing a thing. In practice, however, I doubt that there are that many Cloud use cases begging for homomorphic crypto … once I take away those that belong in private Clouds anyhow (e.g., for SLA reasons) and those that can simply be dealt with via anonymization (e.g., for medical records), tokenization (e.g., for select PII elements), and simple tests for equality (for which standard crypto suffices). Regardless, this is one of those jaw-dropping results well worthy of a you-must-be-kidding-me reaction. I give Gentry plenty of kudos for making his material highly accessible and engaging. In the pile of security papers that I have read over the years, Alice has never looked so good and crafty!
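
For a taste of the principle (though nowhere near Gentry’s scheme, which handles arbitrary computation), even textbook RSA is homomorphic with respect to multiplication: anyone can multiply two ciphertexts and obtain a valid encryption of the product, without ever holding the private key. A toy sketch with deliberately tiny parameters:

```python
# Toy illustration: textbook RSA is *multiplicatively* homomorphic,
# Enc(a) * Enc(b) mod n == Enc(a * b). The "cloud" computes on
# ciphertexts it cannot read; only the key holder can decrypt.

p, q = 61, 53        # toy primes; real keys use 1024+ bit moduli
n = p * q            # public modulus, 3233
e, d = 17, 2753      # public/private exponents, e*d = 1 mod (p-1)(q-1)

def enc(m):
    return pow(m, e, n)

def dec(c):
    return pow(c, d, n)

# The untrusted party multiplies two ciphertexts without seeing 7 or 6:
product_ct = (enc(7) * enc(6)) % n
assert dec(product_ct) == 42   # the product, recovered only by the key holder
```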

In the April issue, I’m reading Sergey Yekhanin’s article on crypto protocols that protect the privacy of queries to public databases. It’s not an identity challenge. Rather, it’s about disguising the intention of a query or a set of queries. In the age of real-time analytics, it’s not far-fetched that a database provider or a data aggregator in the Cloud manages to detect and then leverage mounting interest in a particular topic. Counter to that, the discipline of private information retrieval makes it hard or impossible to infer a subject’s intention, at the expense of some communication and/or data overhead.
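
The simplest flavor I know of is the classic two-server scheme: replicate the database on two non-colluding servers and split the query into two uniformly random index sets whose symmetric difference is the wanted index. Either server alone sees pure noise; XOR-ing the two answers yields the record. A sketch under those assumptions (integer records, honest servers):

```python
import secrets
from functools import reduce

def xor_all(records, subset):
    """A server's answer: the XOR of the records it was asked for.
    Each index set is uniformly distributed, so it reveals nothing about i."""
    return reduce(lambda a, b: a ^ b, (records[j] for j in subset), 0)

def pir_query(n, i):
    """Client side: split the wanted index i into two random-looking queries."""
    s = {j for j in range(n) if secrets.randbits(1)}  # uniform random subset
    return s, s ^ {i}                                 # symmetric difference flips i

# Toy database of integer "records", replicated on two non-colluding servers.
db = [13, 99, 42, 7, 256, 1]
q1, q2 = pir_query(len(db), 2)                        # we want db[2], privately
answer = xor_all(db, q1) ^ xor_all(db, q2)            # shared indices cancel out
assert answer == db[2]
```

The communication overhead is plain to see: the client ships one bit per record to each server in this naive version, which is exactly the cost that the fancier schemes in the article work to shrink.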

In both cases, I’m eager to see how these research results will be reduced to practice. The Cloud can dress up as a transformational technology capable of pulling through some powerful ideas.

10 Issues with smartphone apps

Someone best characterized application vs. platform in just a dozen words, as follows: A good application never surprises, a good platform never ceases to surprise (I’d love to give proper credit, if someone is kind enough to provide me the citation).

I continue to be quite impressed with the two smartphone platforms that I dug into, iPhone and Android. They never cease to surprise me on the positive side with their nuggets of enabling technology.

I do have quite a few issues with their applications and the way they are written. Alas, they surprise me when and where they really shouldn’t. Here’s a list of 10 top of mind issues in no particular order:

  1. Unexpected entitlements. Some applications are more equal than others. For instance, try signing out of your primary Gmail account on Android. It won’t work unless the whole device is wiped clean;
  2. Power efficiency. Some applications turn the radio on very often and can be quite chatty whenever they do so. In the absence of a “green rating” for applications, it’s a trial-and-error process of loading some applications and then discovering that battery autonomy has suddenly tanked, compliments of a “fat” application in the mix;
  3. Applications work unless they don’t. It’s hard to know why an application suddenly gets into the habit of aborting launch. It silently goes back to being a cute square icon, ready to fail again just the same;
  4. Stale coding practices. The application development environments don’t leverage any of the new ideas in software engineering, like Ruby on Rails with its built-in unit/functional testing;
  5. Bloomingdale’s and the bazaar. Paraphrasing E. Raymond, there seem to be just two styles of application store emerging: the exclusive velvety one (iTunes, Ovi) and the open messy one (Android). It would be nice to see some hybrid concepts emerging. It will be a pity if the smartphone software channels are already fully ossified this early in the game;
  6. Password sprawl. Without a widespread identity infrastructure, I’m forced to set passwords in as many different applications and have their renewals/challenges hanging over me. Intriguingly, these too change in frequency and style with the application, making for a really fragmented experience and a race towards lower-grade security policies (i.e., simple passwords with the longest expiration intervals possible);
  7. Back-end password handling. Without a widespread identity infrastructure, chances are that for a given application the database of subjects’ secrets and the subjects’ application data get collocated in the same Cloud and the same logical slice therein. This is what my colleague Gunnar Peterson colorfully describes as loading dynamite and detonator onto the same truck;
  8. Porous sandboxes. The sandbox that an application operates in has several back-alley read/write access pathways to free-for-all data (e.g., the keyboard cache and address book on the iPhone, as described here), thus creating opportunities for Trojans and covert channels;
  9. Panta rei. After I stumble upon a really clever application and make it part of my daily life, it’s quite likely that another vendor will pick up on the same good idea and apply some healthy one-upmanship to improve it. Thus, I regularly face the dilemma of whether to stick with the data accrued thus far or start fresh on a brand-new application, without any migration capability in sight;
  10. Cloakers and phishers. Some applications mean big business and naturally attract ill-intentioned copycats. There are only so many pixels to copy. Current defenses are mainly non-technical – e.g., presence in the iTunes store hinges on relationships between vendor, Apple, and the user community. They are not as effective in the bazaar style of application store.

I don’t believe in the rise of mobile multi-platform application frameworks (other than WebKit, that is), nor do I believe in unicorns.

However, I’m firmly convinced that smartphones will pull through advances in software – be it on the gadget, in the cloud, or in the identity infrastructure – much as they have already done for the 3G telco infrastructure.

Toh, Skype Publishes Codecs

My former colleagues have chosen to publish SILK in an IETF Internet-Draft. I can only imagine how this new resolve must have stirred some discussion among stakeholders. My kudos for the final outcome!

Two Thousand Ten’s Turing to Thacker

I cannot think of a more deserving recipient of the ACM Turing Award than Chuck Thacker. I was actually surprised that he hadn’t been considered before for this high recognition. I’ve been tuned to his brilliant work since the days when I studied the Alto in school. I chronicled my 2008 visit to Chuck and his research team at MSR SV here.

NOTE. In truth, the award announced today is a 2009 award. The title’s alliteration was too good to pass up though…

iPhone pulls through AT&T infrastructure

Like in a Petri dish, I keep observing how the iPhone single-handedly pulls the roadmap of a telco infrastructure. Both the iPhone and AT&T’s wireless infrastructure are expanding at a torrid pace and beyond the wildest imagination (to an outside observer like me, at least). The reaction is amplified by Apple’s single-track mind to perfect a user experience and their exclusive deal with a carrier – in short, a monoculture. No ounce of pull force gets lost. The one-two punch that has developed from Apple to AT&T is a new baseline for textbooks.

A recent report confirms that AT&T has made good on its intent to improve its 3G download/upload throughput. Improvements stem from the roll-out of HSPA 7.2 (besides the sheer new capacity thrown at the problem). Broad technology advances in beamforming, multiple-input multiple-output (MIMO) communications, and orthogonal frequency-division multiplexing hint that there’s quite a bit of headroom for further scale-outs over the next 3-5 years.

I’ve sampled the AT&T improvements directly using the excellent, free Xtreme Speedtest application. For extra credit, I can go multi-platform and run this same application at the same place and time on both my iPhone/AT&T and Droid/Verizon. The speed of a web browsing session would otherwise be highly subjective and dominated by the browser’s own effectiveness.

In a previous blog, I described the “wheel of innovation” looping over the following steps:

  1. New infrastructure build-outs
  2. Leading to faster/broader connectivity
  3. Making it a breeding ground for new applications
  4. Some of them reaching viral spread, network effect, etc. resulting in larger addressable markets
  5. Thus creating demand for more/different infrastructure

(loop back to 1.)

We have gone from step 5 through steps 1 and 2 (even though I have no basis to comment on coverage – I will steer clear of blue vs. red maps…). Now that the infrastructure shortcomings are behind us, along with troubling rumors of usage tariffs, I’m eager to see a new breed of applications (steps 3 and 4).

In a subsequent post, I will share my wish list on what iPhone and smartphones in general can and should pull through in software infrastructure.

Berkeley BEARS Symposium

Ever since I moved to the left coast, UC Berkeley has become the most frequent destination of my research outings (it used to be MIT when I lived in Boston). I’m a regular guest at their RADlab retreats. Yesterday, I joined the 1-day Berkeley EECS Annual Research Symposium (BEARS). The morning was packed with four first-rate keynotes and a panel:

The future of devices, Elad Alon. Nano-electromechanical relays are a promising alternative to CMOS-based technologies and their unavoidable energy leakage. Like any other relay, nano-relays are leakage-free albeit much slower than CMOS and not as reliable. To mitigate these side effects, Elad is looking into more complex logic circuits and the opportunity to exploit parallelism (like in a N-bit adder or an ADC/DAC).

The future of computation, Kurt Keutzer. Deeper pipelining is not sustainable; parallelism is the saving grace. For this, Intel Larrabee and Nvidia Fermi are hugely exciting new processors. But how do we change the code to leverage the new silicon? There is early indication that algorithm/code conversion pays off with up to 100x improvements in time-to-result (teams started off from commodity software, like public-domain support vector machine libraries – libsvm). Kurt did a great job of describing the whole parallel-computing ecosystem and showing why/how it’s labor-intensive. We need more/better frameworks to absorb these costs.

The future of Mobile, Eric Brewer. The iPhone has converged a dozen gadgets into just one (and more so every day). Inside, there are many discrete HW components taking up space and power, hinting that smartphones can either shrink further or carry more logic. Access is the smartphone’s killer app. Increasingly, mobile is a key factor in developing countries. There, it can save lives (e.g., a cellphone “microscope” contraption to detect malaria in the field; a diagnostic device connecting a heart monitor and other sensors via the headset jack). The SIM card may become a good, universal place to store a private key. In developing countries, this setup actually works quite well because it’s already common practice for folks to own a SIM card and share a physical phone. Within every country, there’s a growing digital divide between urban and rural connectivity, with impact on just as many aspects of life as mobile touches.

The future of the Cloud, Michael Franklin. Cloud momentum will continue to be fueled by these value props: variable cost, cost associativity (1000 CPUs for 1 hr same as 1 CPU for 1000 hrs), risk transfer, and get the IT gatekeepers out of the way. There will be more devices and more virtual resources joining the cloud, including mechanical turks seamlessly blended in. Quite fittingly, there will be a new program at UCB to best harmonize Algorithms, Machines, People (AMP). It will launch in Jan 2011 when RADlab wraps up.

Energy panel hosted by Greg Papadopoulos. Can we innovate in energy the same way we innovated in technology? Three principles that have served us really well in EECS and are worth cross-pollinating into energy are: a) layer decoupling, b) distributed innovation, and c) equipping for en-masse customization. A smart power grid is a dumb grid with many different smart endpoints. Some food for thought: make solar panels as cheap as a sheet of glass; do nothing well (i.e., energy proportionality); don’t recycle, upcycle.

The day was nicely complemented by open houses in the various departments, with plenty of posters and demos. For ease of tech transfer to my children, I single out the demo of the software-intensive Starmac quadrotor flying machines by the Berkeley Sensor and Actuator Center (see really cool videos 1, 2, 3 … heck, thou shalt see cool toys, green grass and the blue sky, once you’ve survived those pesky 3D Fourier transforms :)

Web-track me if you can

This week, slashdot called my attention to EFF’s effort to level-set the community on web tracking — how unique (and traceable) does my browser make me look when I visit a web site? This new EFF site returns my overall score along with the breakdown of its factors (like plugin details, screen size, system fonts, cookie handling). For instance, it tells me that the Safari fingerprint generated off of my Mac is still unique among the half-million fingerprints on file at the EFF.

This is a great example of crowd-sourcing at work. The more participants, the better the study. EFF’s work gets a huge boost from being slashdotted. Moreover, EFF is no .com and doesn’t carry the halo of big brother or world domination.

How does one know when the samples have hit a critical mass leading to a reasonably accurate model? It’s a recurring conundrum for both frequentists and Bayesians.

I agree with EFF’s view that a smartphone’s browser is likely to show less entropy. That kind of browser is less likely to veer from the stock config. To wit, my iPhone browser scored 1-in-1,442 uniqueness (10.49 bits of entropy) and my Android browser scored 1-in-8,513 uniqueness (13.06 bits of entropy). To the previous point, it’s unclear how many smartphones have hit the EFF site altogether.
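
For the record, those bit figures are just the surprisal of a “1 in N” fingerprint, which is easy to double-check (a back-of-the-envelope calculation of mine, not EFF’s code):

```python
import math

def surprisal_bits(population, matching):
    """Identifying information in bits: -log2 of the fingerprint's frequency."""
    return math.log2(population / matching)

iphone_bits  = surprisal_bits(1442, 1)   # ~10.49 bits, unique among 1,442
android_bits = surprisal_bits(8513, 1)   # ~13.06 bits, unique among 8,513
```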

This early conclusion about smartphone browsers should not be generalized to native apps running on a smartphone. Native apps can yield the richest fingerprint features yet. They can draw upon sophisticated UUID and TPM schemes in system software, with the SDKs exposing programmatic access, resulting in stronger software/hardware linkages than their desktop/laptop equivalents. Today, the limiting factors have to do with policy – e.g., a vendor’s authorization to export off-device the UUID material that is key to its own DRM.

Generativity
The word generativity jumps at me while I’m reading Jonathan Zittrain’s new book, “The Future of the Internet – and How to Stop It”. Zittrain defines generativity as a “system’s capacity to produce unanticipated change through contributions from broad and varied audiences”. The Internet, the PC, and wikis/Wikipedia best exemplify generativity. It’s “generativity” that I had in mind and tried to convey when I wrote about the Internet’s virtuous wheel of innovation.

Generativity hits home. It’s the reason why I’m so genuinely interested in the Android platform (I got to carry one such phone alongside my iPhone). It’s why I put my TV set into early retirement and replaced it with an Internet-enabled one equipped with a widget SDK – a generative TV in the making, hopefully. I know that I have given and will keep giving 150% in those jobs that have to do with generative artifacts (luckily, I have had a few of those throughout my career).

Generativity is quite a litmus test for new directions in technology. Take cloud computing. Does it mark a new epoch in generativity? Or is this a mere TCO optimizer?

For sure, security, regulations, net-neutrality pose some great challenges to our collective journey in generativity. I look forward to reading the second half of Zittrain’s book and learning about his proposed solutions.

Zittrain came to visit us at eBay and gave an excellent lecture on “Minds for Sale” — an eye opener on both the positive and negative outcomes of long-tail participation in cyberspace.

Teach programming to your littl’ digital natives

In my monthly CACM issue, I found a delightful and somewhat unusual article on “Scratch”. With Scratch, Mitch Resnick et al. at the MIT Media Lab have created a programming environment with the lowest up-front investment for children and teenagers. As you would expect in a platform that speaks to digital natives, Scratch comes with a host of rich media and social networking components built in.

My children love Scratch. They were able to program in Scratch and do things that appealed to them from the very first session. I like them to spend time with Scratch because it lifts the curtain on how computer games and digital entertainment work. It stimulates their creativity and a can-do attitude towards technology.

In the mid-’90s, I had the fortune of meeting Mitch Resnick at the Media Lab. My company back then was a top-tier sponsor. I saw the first prototypes of what became Lego Mindstorms (whose programming user experience planted the early seeds of Scratch). It’s fascinating how Resnick repeatedly gets it. He might as well be the Steve Jobs of computer-human interfaces for the under-age set.
