They say the road to hell is paved with good intentions. Well, that’s OAuth 2.0.

Last month I reached the painful conclusion that I can no longer be associated with the OAuth 2.0 standard. I resigned my role as lead author and editor, withdrew my name from the specification, and left the working group. Removing my name from a document I have painstakingly labored over for three years and over two dozen drafts was not easy. Deciding to move on from an effort I have led for over five years was agonizing.

There wasn’t a single problem or incident I can point to in order to explain such an extreme move. This is a case of death by a thousand cuts, and as the work was winding down, I’ve found myself reflecting more and more on what we actually accomplished. At the end, I reached the conclusion that OAuth 2.0 is a bad protocol. WS-* bad. It is bad enough that I no longer want to be associated with it. It is the biggest professional disappointment of my career.

All the hard fought compromises on the mailing list, in meetings, in special design committees, and in back channels resulted in a specification that fails to deliver its two main goals – security and interoperability. In fact, one of the compromises was to rename it from a protocol to a framework, and another to add a disclaimer that warns that the specification is unlikely to produce interoperable implementations.

When compared with OAuth 1.0, the 2.0 specification is more complex, less interoperable, less useful, more incomplete, and most importantly, less secure.

To be clear, OAuth 2.0 in the hands of a developer with a deep understanding of web security will likely result in a secure implementation. However, in the hands of most developers – as has been the experience over the past two years – 2.0 is likely to produce insecure implementations.

How did we get here?

At the core of the problem is the strong and unbridgeable conflict between the web and the enterprise worlds. The OAuth working group at the IETF started with a strong web presence. But as the work dragged on (and on) past its first year, those web folks left, along with every member of the original 1.0 community. The group that was left was largely all enterprise… and me.

The web community was looking for a protocol very much in line with 1.0, with small improvements in areas that proved lacking: simplifying signatures, adding a light identity layer, addressing native applications, adding more flows to accommodate new client types, and improving security. The enterprise community was looking for a framework they could use with minimal changes to their existing systems, and for some, a new source of revenues through customization. To understand the depth of the divide – in an early meeting the web folks wanted a flow optimized for in-browser clients while the enterprise folks wanted a flow using SAML assertions.

The resulting specification is a designed-by-committee patchwork of compromises that serves mostly the enterprise. To be accurate, it doesn’t actually give the enterprise all of what they asked for directly, but it does provide for practically unlimited extensibility. It is this extensibility and required flexibility that destroyed the protocol. With very little effort, pretty much anything can be called OAuth 2.0 compliant.

Under the Hood

To understand the issues in 2.0, you need to understand the core architectural changes from 1.0:

  • Unbounded tokens - In 1.0, the client has to present two sets of credentials on each protected resource request, the token credentials and the client credentials. In 2.0, the client credentials are no longer used. This means that tokens are no longer bound to any particular client type or instance. This has introduced limits on the usefulness of access tokens as a form of authentication and increased the likelihood of security issues.
  • Bearer tokens  - 2.0 got rid of all signatures and cryptography at the protocol level. Instead it relies solely on TLS. This means that 2.0 tokens are inherently less secure as specified. Any improvement in token security requires additional specifications and as the current proposals demonstrate, the group is solely focused on enterprise use cases.
  • Expiring tokens - 2.0 tokens can expire and must be refreshed. This is the most significant change for client developers from 1.0 as they now need to implement token state management. The reason for token expiration is to accommodate self-encoded tokens – encrypted tokens which can be authenticated by the server without a database look-up. Because such tokens are self-encoded, they cannot be revoked and therefore must be short-lived to reduce their exposure. Whatever is gained from the removal of the signature is lost twice in the introduction of the token state management requirement.
  • Grant types - In 2.0, authorization grants are exchanged for access tokens. A grant is an abstract concept representing the end-user approval. It can be a code received after the user clicks ‘Approve’ on an access request, or the user’s actual username and password. The original idea behind grants was to enable multiple flows. 1.0 provides a single flow which aims to accommodate multiple client types. 2.0 adds a significant amount of specialization for different client types (a short sketch of what this looks like in practice follows this list).
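
To make the contrast concrete, here is a minimal, hypothetical sketch in Python of the two styles side by side. Everything specific in it is made up for illustration – the endpoint URLs, client identifiers, secrets, and the shape of the 1.0-style signing helper – error handling is omitted, and neither half is a complete or authoritative implementation of its specification.

```python
import base64
import hashlib
import hmac
import urllib.parse

import requests  # third-party HTTP client, used here only for brevity

# --- OAuth 2.0: exchange a grant (here, an authorization code) for tokens ---
token = requests.post(
    "https://provider.example/oauth2/token",               # hypothetical endpoint
    data={
        "grant_type": "authorization_code",
        "code": "code-from-the-redirect",
        "redirect_uri": "https://client.example/callback",
        "client_id": "my-client",
        "client_secret": "my-client-secret",                # client authentication varies by profile
    },
).json()

# Bearer token: possession of the string is the entire credential, so the
# request below is only as secure as the TLS channel that carries it.
requests.get(
    "https://api.example/me",
    headers={"Authorization": "Bearer " + token["access_token"]},
)

# Expiring tokens: the client now has to track expiry and manage refresh state.
refreshed = requests.post(
    "https://provider.example/oauth2/token",
    data={
        "grant_type": "refresh_token",
        "refresh_token": token["refresh_token"],
        "client_id": "my-client",
        "client_secret": "my-client-secret",
    },
).json()

# --- OAuth 1.0 style: every protected-resource request is signed with BOTH the
# client (consumer) secret and the token secret, binding the token to the client ---
def sign_request(method: str, url: str, params: dict,
                 consumer_secret: str, token_secret: str) -> str:
    """Simplified HMAC-SHA1 signature in the spirit of RFC 5849; not a faithful implementation."""
    enc = lambda s: urllib.parse.quote(str(s), safe="")
    normalized = "&".join(f"{enc(k)}={enc(v)}" for k, v in sorted(params.items()))
    base_string = "&".join(enc(p) for p in (method.upper(), url, normalized))
    key = f"{enc(consumer_secret)}&{enc(token_secret)}"
    digest = hmac.new(key.encode(), base_string.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()
```

The 2.0 half is clearly less work per request, but nothing in it cryptographically binds the access token to the client that obtained it, and the client inherits the expiry and refresh bookkeeping – which is exactly the trade-off the bullet points above describe.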

Indecision Making

These changes are all manageable if put together in a well-defined protocol. But as has been the nature of this working group, no issue is too small to get stuck on or leave open for each implementation to decide. Here is a very short sample of the working group’s inability to agree:

  • No required token type
  • No agreement on the goals of an HMAC-enabled token type
  • No requirement to implement token expiration
  • No guidance on token string size, or any value for that matter
  • No strict requirement for registration
  • Loose client type definition
  • Lack of clear client security properties
  • No required grant types
  • No guidance on the suitability or applicability of grant types
  • No useful support for native applications (but lots of lip service)
  • No required client authentication method
  • No limits on extensions

On the other hand, 2.0 defines 4 new registries for extensions, along with additional extension points via URIs. The result is a flood of proposed extensions. But the real issue is that the working group could not define the real security properties of the protocol. This is clearly reflected in the security considerations section, which is largely an exercise in hand waving. It is barely useful to security experts as a bullet-point list of things to pay attention to.

In fact, the working group has also produced a 70-page document describing the 2.0 threat model, which does attempt to provide additional information but suffers from the same fundamental problem: there isn’t an actual protocol to analyze.

Reality

In the real world, Facebook is still running on draft 12 from a year and a half ago, with absolutely no reason to update their implementation. After all, an updated 2.0 client written to work with Facebook’s implementation is unlikely to be useful with any other provider and vice-versa. OAuth 2.0 offers little to no code reusability.

What 2.0 offers is a blueprint for an authorization protocol. As defined, it is largely useless and must be profiled into a working solution – and that is the enterprise way. The WS-* way. 2.0 provides a whole new frontier to sell consulting services and integration solutions.

The web does not need yet another security framework. It needs simple, well-defined, and narrowly suited protocols that will lead to improved security and increased interoperability. OAuth 2.0 fails to accomplish anything meaningful over the protocol it seeks to replace.

To Upgrade or Not to Upgrade

Over the past few months, many asked me if they should upgrade to 2.0 or which version of the protocol I recommend they implement. I don’t have a simple answer.

If you are currently using 1.0 successfully, ignore 2.0. It offers no real value over 1.0 (I’m guessing your client developers have already figured out 1.0 signatures by now).

If you are new to this space, and consider yourself a security expert, use 2.0 after careful examination of its features. If you are not an expert, either use 1.0 or copy the 2.0 implementation of a provider you trust to get it right (Facebook’s API documents are a good place to start). 2.0 is better for large scale, but if you are running a major operation, you probably have some security experts on site to figure it all out for you.

Now What?

I’m hoping someone will take 2.0 and produce a 10 page profile that’s useful for the vast majority of web providers, ignoring the enterprise. A 2.1 that’s really 1.5. But that’s not going to happen at the IETF. That community is all about enterprise use cases and if you look at their other efforts like OpenID Connect (which too was a super simple proposal turned into almost a dozen complex specifications), they are not capable of simple.

I think the OAuth brand is in decline. This framework will live for a while, and given the lack of alternatives, it will gain widespread adoption. But we are also likely to see major security failures in the next couple of years and the slow but steady devaluation of the brand. It will be another hated protocol you are stuck with.

At the same time, I am expecting multiple new communities to come up with something else that is more in the spirit of 1.0 than 2.0, and where one use case is covered extremely well. OAuth 1.0 was all about small web startups looking to solve a well-defined problem they needed to solve fast. I honestly don’t know what use cases OAuth 2.0 is trying to solve any more.

Final Note

This is a sad conclusion to a once promising community. OAuth was the poster child of small, quick, and useful standards, produced outside standards bodies without all the process and legal overhead.

Our standards making process is broken beyond repair. This outcome is the direct result of the nature of the IETF, and the particular personalities overseeing this work. To be clear, these are not bad or incompetent individuals. On the contrary – they are all very capable, bright, and otherwise pleasant. But most of them show up to serve their corporate overlords, and it’s practically impossible for the rest of us to compete.

Bringing OAuth to the IETF was a huge mistake. Not that the alternative (WRAP) would have been a better outcome, but at least it would have taken three less years to figure that out. I stuck around as long as I could stand it, to fight for what I thought was best for the web. I had nothing personally to gain from the decisions being made. At the end, one voice in opposition can slow things down, but can’t make a difference.

I failed.

We failed.

Some more thoughts…



  1. Mark Atwood says:

    I can’t decide if I should feel guilty for dropping out immediately after IETF San Francisco, or if I should feel grateful I didn’t waste any time on the OAuth 2.0 fight. Every now and then I would see it discussed at an IIW, and be invited back into looking at the mailing list again, but the energy and focus I loved about the OAuth 1.0 experience just wasn’t there, and I could see all the crap forming (what you call the WS-* way) that we intentionally kept chopping away when writing 1.0.

    Eran, when we meet again next, I will buy you a beer, for your heroic efforts.

    To all that come after, who have to implement 2.0, we are deeply sorry. It wasn’t our fault.

  2. Sad to hear! Thanks for all the hard work.

  3. Peter Verhas says:

    >a new source of revenues through customization

    This is your key point and the driver factor to fabricate this standard the way it gets as per your article. This goal is perfectly aimed and hit with the mix of details and the lack of that and with the anticipated coming different standards and non-standard solutions.

    • Jan Henkins says:

      Peter Verhas:

      What are you saying? Your points are very unclear…

    • JHaag says:

      Wow. I hope this is not an indication of how the new documentation will read.

    • Blessed Geek says:

      Peter, I have no idea what you are trying to say.
      Are you writing in human-readable encrypted code that only your intended audience would understand?

    • Ankit Soni says:

      I think I managed to decode it:

      Literal translation:
      “>a new source of revenues through customization
      The OAuth2 specification aimed to help enterprises get moar money, and it succeeded by writing a specification the way enterprises usually do: incredibly detailed while managing to impose no concrete restrictions/specifications”. I think the implication here is that OAuth2 is essentially meant to help enterprises cover their own ass should they get caught with their pants down. The specification is vague enough that it can be implemented shoddily, preferring time and flexibility over actual security if required, but the specification exists so that it can be blamed when something goes wrong. Excellent point.

      • Matt Tagg says:

        lol’d so hard at Peter Verhas’s comment, trying to understand wtf he was saying! He’s either an epic troll… or . :D

  4. Eran,

    we are in 2012, and there are two things we are sure of: a) large companies have a vested interest in failing standards (ebXML, WS-*, UML, BPMN/BPEL…), and they have mastered the process to achieve that goal; b) WYOS (write your own spec) works. I don’t see why a well-written spec, with the feedback of its users/implementers, can’t take off. You no longer need “them”.

    • platypusfriend says:

      I loved this article, by the way. Just curious… what’s the benefit of failing UML?

  5. Jonathan S. says:

    First, I would like to commend you for admitting that, despite the hard work of yourself and others, OAuth 2.0 is a failure. It takes a strong person to admit failure these days.

    But we also should all acknowledge the biggest failure: the entire web stack. Today, it is nothing more than one hack layered upon another, from the very bottom to the very top. HTTP is full of miserable kludges, from its support for caching to its support for compression. Cookies are an ugly hack. SSL/TLS is another hack. But none of those are as bad as JavaScript, which is among the worst scripting languages ever devised, even if people who only know JavaScript and PHP claim otherwise. HTML5 is a severe regression from the rigor and sensibility of XHTML, and CSS has always made practical layout and styling far more difficult than it ever should be. Given that it’s a patchwork of hacks, the security is horrid.

    Security can’t be tacked on 15 or 20 years later, like has been attempted with OAuth. It is, of course, possible to semi-successfully add another hack to the existing pile of hacks that make up the web, like was done with OAuth pre-2.0. But it’s damn near impossible to go beyond mere hacks, as we’ve seen with OAuth 2.0. To make something like OAuth 2.0 succeed, you need to throw away essentially all of the existing web stack, and rebuild it from scratch, but taking security into account from the very beginning. If you did that, however, you’d find that hacks like OAuth wouldn’t even be needed in the end!

    • Eran Medan says:

      I agree with most of what you say, and the feeling is that the web stack is something that went out of control, I agree that CSS is not the simple joy I would expect it to be (try to align a div horizontally or god forbid vertically without 3 visits to stackoverflow and jsfiddle) HTML5 is not answering many things we would hope it would be, you have too many JS frameworks (most of them are great) and even though I disagree about JavaScript (well, I’m just reading JavaScript the good parts, so I’m biased) I agree that having to console.log everything just to learn what this Ajax call returns (or reading the documentation if there is any) is a pain, which doesn’t really exist in Java / Scala or other strongly typed languages. I was a strong supporter of Flex, and hoped JavaFX would succeed, but it turns out the web has a life of its own, the developer community, the Rails one especially in my opinion, has created something very elegant in a world without elegance, things like SASS/SCSS/LESS, things like CoffeeScript, like Ruby on Rails / Grails / Django / Play framework, and let’s not forget jQuery, and even Dojo and the commercial, but great sencha / ExtJS, the people who did the web standards might have made errors, but the enterprise, the standards organizations are just a little behind, that’s all, they have much more legacy stuff (people and machines) to satisfy. I think what needs to happen is to have the “hacker community” meet the “software engineer II” community and share ideas, concepts and not just be “us” and “them”. Corporate software developers solve hard, real problems, they just use XML and SOAP and XSLT and all those WS-* nonsense because they are a little behind, that’s all, and also because no one has shown them the right way. I work in a corporate, financial corporate, and people do want to learn, they do get excited hearing about Node.js, and about Thrift and protocol buffers, they like seeing the benefits of MongoDB, they appreciate the speed of Ruby on Rails / Grails and in some rare cases, they eventually adopt new things. I think instead of “hating Java” and overengineered software, hating windows SOAP and relational databases (except MySQL) people should just show the enterprise engineers that there are other ways, most of them simply don’t know as they have no time or simply think the world revolves around Java EE / Spring / Hibernate…

      • Anonymous says:

        Oh god, my kingdom for a few paragraph markers!

      • David Lawrence says:

        Just a helpful nudge with your js debugging (assuming you’re debugging in the browser and not on a Node.js server app). WebKit developer tools, available in both Safari and Chrome, allow you to set breakpoints and debug your js much as you might debug compiled code in an IDE. No need to console.log all that stuff. I believe Firebug in Firefox provides the same functionality but it’s been a long time since I’ve used it.

        • Scott M. says:

          If you are an Eclipse user then there are packages you can install which give you all the bells and whistles of other IDEs while coding in JavaScript, including syntax coloring, smart indenting, and, best of all, an AST parser which gives you context-sensitive information, a.k.a. IntelliSense.

      • platypusfriend says:

        (try to align a div horizontally or god forbid vertically without 3 visits to stackoverflow and jsfiddle)

        …HA! Been there… so true

      • kjs3 says:

        Was there a point in there?

    • Chris Adams says:

      I used to feel similarly to you until I thought more about why so many heralded better technologies failed:

      XHTML failed because that “rigor and sensibility” never existed: in practice, XHTML was unusable for any real web site unless you gave up all hope of validation – due entirely to the various standards committees in the XML ecosystem trying to build in enormous amounts of complexity in an attempt to solve every problem in advance. In practice, this caused no problems except for people who wasted time trying to produce valid XHTML, and the rest of the web converged on HTML5 because it solves problems authors actually have. It turns out that not penalizing people for moving ahead of a committee is a great way to make new things which normal people want to use.

      HTTP might be miserably kludgey in your opinion but it’s solved more real problems than any other protocol in existence because of its simplicity and extensibility. Remember CORBA? Remember SOAP and the million WS-* protocols? All heralded as better ways to build Serious Applications but so complex that in practice no two implementations worked while simple HTTP applications work routinely on harder problems with an insane number of different clients.

      JavaScript: again, not great but it had enough flexibility that people have been able to bootstrap more advanced constructs than most critics thought were possible. Various self-proclaimed better languages tried to claim that position but again, they all failed because they added huge amounts of complexity to every application with little real benefit. Remember client-side Java, with 10-100x the code for any example, littered with AbstractFactoryFactory and developers at Sun full of hubris deciding that all existing graphics APIs were unusable so they needed to invent their own?

      As far as security goes, again, there are some areas for improvement but no other system in the history of computing has offered millions of people anything remotely as secure as the current web, warts and all.

      The simple fact is that building massive heterogenous distributed systems is hard: it’s easy to build castles in the air but actually building them takes longer and costs more than expected with generally little to show at the end. Much as people like to think otherwise, we’re simply not smart enough to build enormous systems top-to-bottom without constant real world feedback. You get that by making designs as simple as possible to solve a problem which real people are demonstrated to have and iterating quickly. Trying to redesign the web stack top to bottom is our field’s quest for perpetual motion.

      • Eric Liu says:

        Well said.

      • Bill Burke says:

        Javascript doesn’t count as there is no other feasible option in the browser.

      • KatieP says:

        Well said. People keep on looking for a silver bullet for a complicated universe. And with security not even built into the infrastructure, it is difficult to retrofit. Protocols like this are great in theory, but there is no mention of the infrastructure required to support them.

    • Martin O says:

      Yeah, I absolutely agree that Javascript is suboptimal, it’s unmaintainable and a pain to debug, and you don’t really want to use it for anything spanning more than 20-or-so lines of code. It also used to be slow but less so these days with V8 engines and optimizations.

      On the other hand .. machine language and assembly language are also a pain to maintain and debug, yet machine language is always down there at the very bottom of the tech stack somewhere. As I see Javascript, it’s evolving more and more into a standard “machine language” of the web. Not a perfect standard, but something that you can build sensible higher-abstraction “compilers” on top of, just in the same way C took over when assembly language became unmanageable.

      Same could be said of HTML and CSS. Neither is perfect, both do the job, and if you don’t like the lack of structure, wrap it in something more structured.

    • Paul says:

      Yes! It’s hard to understand why an industry that claims to be rapidly deploying new technologies is stuck with hacking up old and worn-out protocols. X.509 is the best example – supposedly the core of web security… It’s ’80s technology that’s been patched and extended, but still uses the ’80s idea of a grand Directory service. We need a clean slate.

    • Why do you think much of existence is exactly the same as what you describe? Layers and layers of good intentions that never quite achieved the “perfection” they aimed for and so ended up “macgyvered”. The world isn’t perfect, we aren’t perfect, so why do we think we can create something that is? It is the major stumbling block for a vast number of technically-minded persons to think that there is such a thing, and in my mind, is exactly what Eran Hammer laments in his post. To strive for something that is perfect for all, leaves you only with compromise.

      Success occurs online with structures that are either designed for a targeted audience or ones that can survive in a hacked, overloaded and misused state; that is because the majority of Life works in exactly the same manner. It is a coder’s urge (bordering on OCD) to always want to go back and “start again”, but when systems become so large and depended on, you will most likely send yourself mad in doing so.

      Take JavaScript as an example, it wasn’t constructed for the way it is used today – but it has survived because it can be used in so many different ways while still following simple, clear rules. It isn’t perfect, or even anywhere near the best, but I think it’s a language that embodies the way the Internet is…

      Just to add – Mr Hammer, I feel the pain of your struggles, my hat is off to you.

  6. Bruno Matos says:

    Hi Eran,

    I know all about that game… Been there… Done that… Left as well… And I was on an enterprise forum… It’s always the same game. It’s not about doing what’s right, it’s about doing what will get me money and/or what costs me less.

    At the end of the day, they don’t like sharp and young people who don’t care about the business value… They need puppies…

    So, I’m with you ;)
    Cheers,
    Bruno.

  7. Sandro Hawke says:

    Thank you for sharing. Too often, this kind of story never makes it out.

  8. David Welch says:

    Having spent the last few weeks learning about OAuth 2 and implementing / studying it for several providers (Facebook, Google & Twitter [which is really OAuth 1 I guess]) – I have to agree. I’m no security expert but it felt maddening to learn because it seems like each provider can implement it in such a vastly different manner. Glad to know it wasn’t just me missing something & rereading documentation over and over.

    I don’t know much about your work, but I appreciate the post & the hard work it sounds like you’ve put in. Sounds like you fought the good fight, but as most engineers know you can only swim upstream so long.

  9. Juampy says:

    OAuth 1.0 is being used successfully by thousands of websites using the OAuth Drupal module (http://drupal.org/project/oauth). I currently maintain that module. Should we discard adding OAuth 2 support? Are there any other alternatives? It would be great if you could chime in at http://drupal.org/node/1591692.

    Many thanks for your hard work on this protocol.

    • Eran Hammer says:

      Not sure what OAuth 2.0 support would look like. The hard part would be deciding what to implement and how. I would put it on hold until you find use cases that justify the effort.

  10. Chris Chiesa says:

    A few years ago, some members of Mensa informed me that despite their *individual* genius (or higher) IQs, when a group of them experimentally took an IQ test *as a group* the result came out “dull normal.” There’s just something about group effort that destroys the output. So, even if your “capable, bright, and otherwise pleasant” people WEREN’T just “show[ing] up to serve their corporate overlords,” the end product would *still* have been poor. It appears to be an inevitable feature of the Universe.

    • Jonathan says:

      This may help all of us in terms of being able to think “collectively” instead of as individuals, with nod toward members of Mensa to help solve the reasons behind their results.

      This medium cannot help to justify my explanation because of its limitations, but I will do my best to translate – and yes this idea can be considered “out there”. Take any idea with a grain of salt, but never exclude that idea solely based on your perception of “it is impossible”. If the idea did not have the slightest possibility of existing, why would you be thinking of it?

      James Cameron’s Avatar would be the best reference to date; I am pointing to the physical connection representing their far more global awareness. Another reference are Native Americans and other Tribal Cultures consistently saying how we are all connected, they also refer to non-physical connections.

      Has any one of you gone through an experience where – unexplainably – you were able to match the actions of a person you barely knew, or were able to connect without speaking and engage in a game of responsive physical action?

      The central effect being that you still knew who you are mentally, but your mind felt significantly more expanded and you temporarily lost the physical sense of personal identity. Afterwards, you had to rein your mind back in and you were tired for a time following that experience.

      That is the best explanation I can give for your Mensa friends.

      • Michael Brown says:

        I have a simpler explanation:

        Collaboration requires roles for each participant. Without clearly defined and well-allocated roles, the collaboration’s value can be affected.

        I suspect if they determined the best candidate for each category of IQ question and gave them the role to answer that question, the ‘system’ would have produced far better results.

  11. Nick Desaulniers says:

    There will always be hope, wherever you are, until you yourself abandon it. When we fall off the horse, we dust ourselves off and get back on. Don’t give up. The Mozilla Persona project would benefit from your insight. Please, help us make the web a better place. https://login.persona.org/about https://github.com/mozilla/browserid

  12. Travis says:

    Timely post. I have been investigating OAuth the past couple of days for a project implementation. Is your opinion that OAuth 1.0a is still a strong and valid approach? Any other recommendations to use for a standardized approach?

  13. joseph says:

    I’m an intern at a fairly large web service company, and my summer project was to implement an API for one of our services that supported OAuth so that clients could access private user data in a secure way. I found out that no one there was up to speed on the protocol so I read the OAuth 1.0 spec, then read 2.0, and then asked my managers which one I should use. Since it seemed like our competitors were implementing OAuth 2.0, we went with that.

    Our API clients are historically incompetent, so I insisted we use some kind of signature to avoid risking their credentials being compromised. I ended up using JWT authentication. This required a close reading of the OAuth 2.0 Authorization Framework, the OAuth 2.0 Assertion Profile, the JWT Bearer Token Profiles for OAuth 2.0, and finally the JWT spec itself. And too much head-scratching for a college intern.

    So I get that OAuth 2.0 isn’t something you can just plug in and have the kind of security guarantees you’d like to have (because it’s flexible or something). Have I done bad to go the route I went? And if the IETF isn’t the place for creating the right spec, what is?

    • Eran Hammer says:

      Personally, I wouldn’t touch JWT with a 10 foot pole, or any of those messed up assertion extensions. It is hard for me to say if you picked the right solution without having full insight into your existing infrastructure and requirements. The IETF is clearly the wrong place. The right place is probably github. Someone should write a new protocol and control that work, just like open source.

  14. Kent says:

    Eran,

    Thank you for your efforts over the years. Even if things have taken an ugly direction, please be proud of all that has been accomplished using OAuth (both 1.0 and various drafts of 2.0) that could not have been accomplished otherwise.

    You did mention that you foresee small communities forming to develop new, simple, secure protocols that address targeted use cases especially well. Might I suggest that you’re a logical candidate for such a community to form around? Your reputation precedes you, especially where OAuth is concerned. If you were to finagle the cooperation of Facebook and Twitter, or other heavy-hitters in the dev community (e.g. DHH), it’s a near certainty that you’d be able to achieve what you’d envisioned, AND guarantee widespread adoption. Just something to think about…

    • Eran Hammer says:

      Thanks for that trust. I have considered doing that many times over the past year. But the truth is I’m tired of it and I am not really in a position to lead such an effort given that I am not an active developer working on API authorization security at the moment. I think the key to a successful project, which was a hallmark of OAuth 1.0, is being developed by those who are actually shipping it. But if someone else starts something, I’ll do my best to offer my lessons learned.

      • Kent says:

        I’d be interested in contributing to any such effort. I know many others who would be anxious to contribute as well, but I don’t think that any of us have the requisite notoriety to generate widespread community interest. And that more than anything is my point. Few people have that going for them. You do. :)

      • but I think that perhaps what people need is an architect who could show us what we need to do, explain why, and give us more or less a good idea to follow; people will just enact those ideas themselves. We don’t really expect you to sit down and write it, appearing next week to say “hey guys! download the javascript code (or whatever) here”

        sometimes, people just need to know what to write; having somebody like you, with an instantly recognisable history on the subject, people will follow.

        so perhaps you should offer to steward a process, very lightweight, but focused on directing people as opposed to sitting until 4am writing code…..there are lots of people who would do that….we just need a guy to say which direction to walk…and have a place to always come to when needing directions…that github project, could be this place.

      • Tyler says:

        I, too, would be interested in contributing. Your knowledge and expertise could be used to guide multiple developers and quickly produce a product that is everything OAuth 2 was supposed to be. Just because you yourself are not typing the code doesn’t mean you aren’t actively developing the product.

        Think on it. If you do decide to start up a new project, you will have many people who will jump at the opportunity to assist and contribute.

  15. Jesse Emery says:

    Eran,

    Thanks for all the work and trying to fight the fight.

    Why not just go the Douglas Crockford route a la JSON and publish your own “spec”? What you thought OAuth 2 should have been. I, for one, would appreciate that tremendously.

    • Aaron Stone says:

      +1.

      This is a huge bummer. I know I, and many others, in the web space, saw what you were doing and said, “This guy knows what’s up. I can chill out, wait for the spec, and then full steam ahead.” I’ve told this story to many coworkers over the past two years. To hear that people like us backing away left you to the wolves is really a shame.

      Publish OAuth 1.1 directly on GitHub, and if you want an “official” version, run a personal submission to the IETF (assuming it hasn’t been fucked by IPR…)

      • +1 as well! Eran, you are clearly far more rational than your IETF counterparts – it’d be a delight to see your work done in the cleanest way possible, solving a specific problem with a specific protocol in mind. I don’t want to say it’s the best path to redemption – it’s entirely up to you whether even continuing the OAuth brand is worthwhile – but you would be met by many supporters and co-contributors on Github, myself included.

  16. Kevin Turner says:

    My condolences.

    I only hope that this decision gives you the freedom to spend more time on more fulfilling things.

  17. Why does this (SOAP) sound so (WSDL) familiar?

  18. NormF says:

    This is indeed unfortunate. My condolences to you and your bastard protocol.

  19. Eran,

    This is truly unfortunate. I was unable to follow the work on OAuth due to the volume of messages and my work emphasis placed elsewhere, but what you tell me here is definitely not a surprise. Well, almost. I knew for a long time that you had a strong personal desire to see this through. I’m not surprised the IETF process has drained you.

    Perhaps rather than entirely abandon the work, you just step out of the IETF process. Go define that profile of which you speak, or perhaps even step back to 1.0 and define 1.5. Call it 3.0.

    Paul

    • Ladislav Thon says:

      I’d actually propose to align the numbering with the history of HTML and instead of calling it 1.5 or 3.0, it should be called OAuth 5. Half-serious about it :-)

  20. Ian Hickson says:

    I wish you had had that experience before you convinced me to let the IETF get their hands on WebSocket. :-P (Same thing happened there, I ended up getting my name removed from that spec too. What a disaster.)

    • Eran Hammer says:

      All I can say is that I’m truly sorry! The biggest problem with the IETF is that you are completely at the mercy of the working group chairs and area director. When the effort is short, you know who you are dealing with and if there is good chemistry and consensus, it is smooth sailing. But if the effort takes longer than a year, chairs and area directors change and there is no telling who you will end up with. In the OAuth case, the original team was fantastic and fully aligned. But three years later and a whole new set of characters (as well as the move from the application area where it belonged to the security area), it was clear which direction the wind was blowing. It got so bad process-wise that compromises made at huge expense (over hours of conference calls) were later tossed aside by a new chair who was more aligned with the other side of the issue.

      • Ian Hickson says:

        Oh don’t worry, not your fault. Honestly I was as shocked as the next person about how much of a mess that work became. The biggest problem in retrospect was mostly just the assumption from the people trying to sell us the IETF (Lisa, mainly, but also others) that the IETF had the expertise and that the WHATWG did not. But in practice, I think the feedback we were getting at the WHATWG was of higher quality; the best people in the WG were all already commenting while we were just a WHATWG effort. (We’ve since seen this again for other specs, like the Origin spec and the MIME sniffing spec.)

        Really it says more about how quickly the WHATWG has matured than about the IETF.

        Anyway, WebSocket’s design is worse now (IMHO) than when it arrived at the IETF, but it’s nowhere near as bad a situation as what you’ve described for OAuth.

    • Gareth Collins says:

      What is so bad about websockets?

  21. Jay C says:

    As a developer but not a security expert, I had to read this and provide a summary recently of what changed from 1.0. Even without security experience it was clear to me that the latest version was ridiculous, and I called out a subset of what you do here.

    I’m sorry to hear that your work is now (somewhat) for naught, and thank you for your efforts, sir. Sucks to be in that position.

    As suggested below, it’d be good to see your unadulterated proposed security spec, and hopefully the power of the OSS community could help drive adoption, if so.

  22. Thanks for all the effort, but it sounds emotional. What you were claiming were tactical issues, not just technical ones – for example, have you ever had a security hole reported so far, or do you not want to disclose it here… Or, finally, do you think that you are responsible for developers’ knowledge, no disclaimer…

    Regards,
    K .N.

  23. Keith Harris says:

    Eran,

    Just wanted to add another Thank You for your hard work.

    One day, we will fix it. One day, people will forget that the internet wasn’t always safe and simple and within their control. When it happens, it will be because of effort like yours. Well done.

  24. Steve Midgley says:

    ++Nick’s comment about joining the Persona/BrowserID work over at Mozilla. To me, this approach is the best concept in web security (and privacy!) I’ve seen yet. I feel like if we’d thought of it back in the mid-’90s we could have solved this once and for all. As it is, we’re dealing with enterprise v. web and losing regularly. I have some hope that NSTIC might not get taken over by enterprise, but it’s a slim hope. Anyway – hop onto the Persona work if you can stand another go, I know they could benefit from your expertise.

  25. [...] and don’t understand what OAuth is or how it works, you should still read OAuth 2.0 and the Road to Hell, because it is a historic moment, and unusually well [...]

  26. slon says:

    I like it when specs are made by Google and Facebook rather than “web” theoreticians because it makes them much more practical. OAuth 1 was difficult for an average developer because it required client-side signing of MAC tokens. You have to compromise between usability and security.

    • Mark Atwood says:

      Slon, we knew when we wrote 1.0 that the client-side signing was going to be difficult. Several different approaches to avoid that were proposed, and IIRC we even went pretty far with one, until someone with some good crypto experience weighed in and showed how to break it. To do what OAuth 1.0 does, client-side HMAC is mandatory. Prove otherwise, and you will deserve the Fields Medal you will get for it.

      • I don’t think the problem is difficulty, that can be abstracted away with libraries; the problem is, nobody stepped up to make those libraries. Lots of web developers are quite low-skilled in the programming game and if they are the people who are trying to do this encryption “thingy” then I reckon that’s why it’s all of a sudden so hard to accomplish.

        to get around this, we just need a simple 1, 2, 3, 4 type library which doesn’t impose so much policy, so that it’s malleable into different forms and use cases. Then the problem disappears.

        but yeah, we need to do encryption on the client, we can’t just sit back and blindly trust TLS, because I don’t know about you, but I don’t trust the person saying they are the person they say they are, I need to force them to prove it. if I need to encrypt stuff on the client to get that, then so be it.

      • Eran Hammer says:

        Actually, OAuth 1.0 does support sorta-bearer-tokens with Plaintext “signatures”. However, no one (other than Yahoo! IIRC) was willing to deploy it. IOW, when given the option between ease of deployment or improved security, most 1.0 vendors chose security. 2.0 doesn’t give them that option anymore.

    • kjs3 says:

      We security types agree there needs to be a compromise between security and usability, but that goes both ways. Your comment displays an inability to comprehend security at a level that should render you prohibited from touching code exposed to the Internet at any time and under any circumstance.

  27. Wandspiegel says:

    Yes, this is sad. And it happens all the time.
    Exactly the same thing happened to the whole Semantic Web effort at W3C. It basically got overtaken by enterprise and now it is of little interest or use to regular web developers.

  28. Julian Reschke says:

    Thanks for the interesting story. I had a few discussions with the OAuth WG, and I have to agree that it wasn’t too pleasant.

    That being said I think it is unfair to say that what happened here is what happens throughout the IETF. I’ve seen all kinds of Working Groups, some dysfunctional, some working great but slowly (ahem), some working just right (appsawg comes to mind).

    Best regards, Julian

  29. Jay Glasgow says:

    Eran,

    I really appreciate all you did and the challenge you faced. As I have stated before – we try to cram transactional assurance into one of the three A’s, and the best candidate unfortunately becomes Authn, since few think of Assurance as its own “4th A”. But PLOA is gaining some huge traction and we welcome folks that want to work on off-loading Assurance to its own decoupled standards-based architecture.

    I know you already know this stuff, but for the other readers here are some links to get started:

    Video close-up of a demo – watch both parts to get the complete story.
    http://youtu.be/d23RquKvKdk
    http://youtu.be/Vfl_6KuvJ2k

    As follow-up on the PLOA videos, these links might prove helpful.
    • PLOA White Paper – http://openidentityexchange.org/sites/default/files/PLOA%20White%20Paper%20-%20v1.01.pdf
    • Donovan PLOA Blog – http://www.attinnovationspace.com/innovation/story/a7779392
    • Drummond Reed PLOA Blog – http://equalsdrummond.name/2012/04/08/ploa-just-what-you-need-to-know/

    =Jay

  30. Mark Hagan says:

    I am glad to know that I am not the only one who has hit the wall that is OAuth 2.0. I gave up on implementing after only a few days of research.

  31. One problem with OAuth 2.0 is the mismatch between the original design goals of OAuth 1.0 and how OAuth 2.0 is being used today by Facebook, Twitter, LinkedIn and other social networks. OAuth 1.0 was an authorization protocol designed for traditional, server-side, pre-Web 2.0, Web applications. Most actual usage of OAuth 2.0 is for social login, which combines authentication and authorization; and we now live in the age of Javascript and native mobile applications. Recent discussions on the OAuth mailing list have reported that mobile application developers are using undocumented Facebook extensions to prevent user impersonation attacks, and that developers who do not know of the undocumented extensions are producing vulnerable implementations.

    What would it take to design from scratch a social login protocol for the age we live in? If you are curious, have a look at http://pomcor.com/2012/06/25/a-protocol-for-social-login-in-the-age-of-mobile/ .

  32. [...] official: Eran Hammer is leaving the IETF OAuth working group. I quote from his blog: “To be clear, OAuth 2.0 at the hand of a developer with deep understanding of web [...]

  33. Dick Hardt says:

    Wow. You insisted on total editorial control, restructured the document several times, dragged your heels on submitting changes, ripped out the bearer token to a separate spec because you don’t like that mechanism, started the MAC spec for a signed token, but then resigned from that spec. Now you resign and blame enterprise use cases for a spec you . Herding the cats to end up with a simple specification is hard, and it is the job of the editor.

    • Eran Hammer says:

      I’m not surprised that you are upset and I am truly sympathetic to your frustration. You have invested much time and energy working on the WRAP specification only to have it put aside in favor of the OAuth 2.0 effort. You were then hired by the lead enterprise player to represent their interest in the working group and give them top billing on the specification. At this point, regardless of the process and how it ended where it is, OAuth 2.0 is not a good specification. Like many others, you bailed out early, as soon as your contract ended. I have spent the last year working on this on my own dime. I take full responsibility for my actions and failures. I think I have made that pretty clear. The outcome is an honest representation of the working group consensus and compromises. It is the best result any other editor could have accomplished given the working group makeup. But that doesn’t make it good. Since your name is the one left listed at the top, I will leave it up to you to make up your own mind about the quality of the work and whether you are proud of your affiliation with it. But you can’t have it both ways: it cannot be my fault and a quality specification at the same time. It’s also telling how you are the only person who turned this into a personal attack.

      • Dick Hardt says:

        I also feel your frustration dealing with the IETF “security mafia”. Every IETF meeting I have been at has been frustrating. I was at 3 meetings trying to get what became OpenID work started at IETF. Could not get it started there so helped form the OpenID Foundation to work on identity problems.

        I’m ok with my name being on the spec as OAuth 2.0 is essentially WRAP which is simple to implement and is great for authorization. (btw: WRAP was not put aside for OAuth 2.0 — it IS the core of OAuth 2.0)

        Unfortunate twist as OAuth 2.0 is also used for authentication, something it was not designed for.

        And I agree that client signing is better than bearer tokens — but bearer tokens are good enough for many use cases. The JWT work is hopefully the future of standard tokens.

        • Eran Hammer says:

          I had no issues with the IETF “security mafia”. In fact, my problem was that they were completely absent. The great promise of experts coming to help from the security area was all promises, no action. The fact that you consider the JWT work the future is exactly where we disagree. I find it overly complex and more focused on enterprise use cases than on simple consumer web applications.

        • KatieP says:

          The article is harsh, IMHO. The OAuth protocol itself is sound (I will not call it a security protocol under any circumstance). The interesting part is the bearer token for 2.0; the problem is that there is no well-defined format for what it looks like (whether it should be cryptographically protected, both encrypted and signed – and it should be), and the client should authenticate itself again before using the token. 1.0a is not much better, IMHO. The weakness is in how the token is protected, not in the protocol itself.

  34. [...] a recent blogpost Eran explains why he withdrew from the OAUTH WG. Having observed the workings of that particular WG [...]

  35. Rodrigo Contreras K. says:

    Hi, Eran:

    I also believe there’s a very real and problematic separation between web and enterprise worlds.
    At the same time I think this is a natural outcome given their different motivations.
    If OAuth2.0 has failed (and I have my doubts) there are still a few alternatives to investigate.

    If I were you:

    1. I would rethink the “bastard protocol” on my own and focus solely on the web, APIs and cool things.
    2. I would name it “NobleAuth 1.0”.
    3. I would work on it (seeking my own peace of mind) until the following questions become easy to answer:

    a) Does it make sense to upgrade from OAuth* to NobleAuth*? (YES, absolutely)
    b) Does it provide better security? (YES, absolutely)
    c) Do I have more than an 80% chance to get it all right if I’m a fine programmer with zero experience in API integration? (YES)
    d) Is there an alternative to NobleAuth*? (Not at all, because of… [endless feature list and comparison table])

    I would put it out there and wait for it to become the de facto authentication standard for web systems.

    Do you (we) really need the IETF to develop (or finish) something great?

    Why?

    Good luck and thanks for this post and all your efforts.

  36. [...] I’ve found myself reflecting more and more on what we actually accomplished,” he wrote in a blog post yesterday. “At the end, I reached the conclusion that OAuth 2.0 is a bad protocol… It is bad [...]

  37. [...] After three years of work on the Internet Engineering Task Force (IETF) specification document for OAuth 2.0, the document’s editor, Eran Hammer, is withdrawing from the project. “The many hard-fought compromises ended in a specification that fails to achieve its two main goals: security and interoperability,” Hammer writes in a blog post. [...]

  38. jf says:

    I think the OAuth brand is in decline. This framework will live for a while, and given the lack of alternatives, it will gain widespread adoption. But we are also likely to see major security failures in the next couple of years and the slow but steady devaluation of the brand. It will be another hated protocol you are stuck with.

    oauth is a frigging mess of half-witted insecurity, doesn’t matter whether we are talking 1.0 or 2.0, everyone implements it wrong, it’s taxing on the RNG which results in easily guessed tokens, et cetera et cetera. Way to set web security back a decade by being dummkopfs. Your major security failure will be coming sooner than you think.

  39. Brian McConnell says:

    The IETF process at this point is pretty much guaranteed to produce crap. None of the people there are responsible for shipping a real product within a real time frame. Over 15 years ago, a colleague and I at Nortel tried to introduce a _simple_ protocol for dealing with computer telephone interactions, which could have turned into a nice VoIP protocol. Instead, we got SIP, which, to understate things mildly, is not easy to implement. Simplicity and usability were never what determined winners at the IETF; it was all about academic credentials and academic/industry politics. I haven’t attended an IETF meeting in years, and haven’t looked back. Sorry you got burned by the process, wasted time, etc.

  40. sun says:

    I can only second others. Given your knowledge, experience, and reputation, it sounds only natural and logical to get back to Getting Things Done™. Idealists like us just don’t give up. ;)

    Very naturally, it will definitely take a vacation (or two) as well as quality time with friends and family to get past the frustration and resignation state (of fighting a doomed uphill battle). But based on the mindset you’re communicating, I cannot believe that you’re going to let this truly pass. ;)

    Having looked into both specs, I’d see a high potential for OAuth 1.5 (or 3.0, or 2.0NG ;) ). I fully agree with your analysis/evaluation on the current 2.0 proposal.

    I somewhat disagree with the mentioned general stance of 1.0 being insanely hard to grasp/implement though. IMHO, the problem of 1.0 is two-fold: 1) The spec goes to great lengths in trying to explain how implementations are supposed to work, but is lacking essential details all over the place, and for the average developer, the entire spec is cryptic in the first place. 2) Despite the clear spec, there are/have been multiple reference implementations for individual platforms/languages/interpreters, consisting of very poor and not maintained code for some — which, in turn, made “innocent” developers study the cryptic OAuth spec/implementation in the first place. So while I agree that (primarily the signature part of) the OAuth 1.0 spec was/is complex, I think that the spec itself only takes a minor part of the blame. The major part rather is that an open protocol requires valid, working, and co-maintained quality implementations for most/all platforms.

    In any case, these issues are resolvable. Even more and even better with a revised and improved spec. Even if you’d only take on a minimum “responsibility” of setting up a repository and inviting/shaping a team of maintainers to get the OAuthNG kick-started as a newborn FOSS project, I’m seriously and truly confident you’d have a critical impact on the web’s future. Keep it KISS, incorporate the OpenID Connect and other proposed enhancements; it will prosper.

    First, it’ll take time to get over it though. (no pun intended, I’ve been there before)

    Thanks for your hard work, your patience, and vision!
    sun

  41. [...] Eran Hammer-Lahav, the leader who drove the OAuth 2.0 project, posted on his blog that the OAuth 2.0 standard is a bad protocol and announced his break with the project [...]

  42. Ivan says:

    You tried, at least.

  43. Eran Hammer says:

    Some people in these comments and elsewhere have made a big deal out of the fact that they consider OAuth 2.0 to be much easier to implement than OAuth 1.0. But the point they are missing is that these experiences are largely based on proprietary profiles of the early drafts. It is true that 2.0 can be profiled into a simple, straight-forward protocol. But every single one of these profiles is slightly different. Developers have to relearn it for each vendor. And of course, there is no interop across vendors. So sure, 2.0 can be simpler, but so can any other home-brewed alternative. If ease of development comes at the expense of interop and baseline security, what’s the point of standards?

    • > It is true that 2.0 can be profiled into a simple, straight-forward protocol. But every single one of these profiles is slightly different.

      If interoperability between profiles is the only real issue, it’s a bit dramatic calling the standards process “the road to hell”. Interoperability issues can be explicitly fixed as a next step the way the SOAP guys did with WS-I(nteroperability) Basic Profile, which pretty much put that issue to rest (pun not intended).

      I can empathise that the “design by committee” approach has produced something less elegant than what any single committee member would have liked, but pragmatically, can we now start to use this to build real apps? What I’m reading from your post even after all the criticism is that it will indeed be possible, once interop is fixed.

  44. Craig Wright says:

    Hi Eran, I cannot say I am surprised to read this given our discussion at last year’s OSCON, but it is disappointing all the same. Thanks for the effort, and I hope the next or current project has a more satisfying conclusion.

  45. Target says:

    Hiya,

    Why doesn't someone just take the protocol that worked, and the features that worked, and start a new protocol-based OAuth fork?

  46. Wyn Williams says:

    I feel your pain :( Thanks from all of us for trying to hold the fort, your contributions are very much appreciated.

    I was considering 2.0 for a large scale project but now will not, as we cannot afford the risk (and we have some pretty good security peeps). Hopefully this obvious cluster fu** will somehow get pulled back to reality, but from your post it looks doubtful.

    Onwards and upwards Eran !

  47. [...] "less useful, more incomplete and, above all, less secure," writes Hammer-Lahav in a blog post. "I have given up my role as lead author and editor, [withdrawn] my name from the [...]

  48. [...] Hammer-Lahav has meanwhile already removed his name from the new standard and explains in a blog post why OAuth 2.0 is, in his view, a 'road to hell' [...]

  49. [...] After three years of work on the Internet Engineering Task Force (IETF) specification document for OAuth 2.0, the document's editor, Eran Hammer, is withdrawing from the project. "The many hard-fought compromises ended in a specification that fails to achieve its two main goals: security and interoperability," Hammer writes in a blog post. [...]

  50. Torsten Lodderstedt says:

    Hi Eran,

    As already indicated on the list, I think you did a great job as editor of the base spec. Sadly, your assessment and mine of what we have achieved so far seem to contradict each other. At Deutsche Telekom, we have implemented proprietary token protocols as well as OAuth 1 and 2. Based on our experiences I would conclude that OAuth 2 is by no means perfect, but it is better in terms of interoperability, usability, and scalability than everything else we had before. And our security team is happy as well.

    As editor of the OAuth 2.0 security document, I would be eager to know how you came to the conclusion that OAuth 2 is less secure than OAuth 1. What problems of practical relevance do you see? And what evidence do you have?

  51. Joseph Werle says:

    This was beyond inspiring and shows how one can truly be humble about a project they started and know when to throw in the towel. It is another spec gone wild and too far out to come back. Being the creator and recognizing that is truly a power many don't have.

  52. Mojo Jojo says:

    The 2.0 draft is roughly double the number of pages of 1.0.

    Double the page count for a revision is usually a sign of administrivia and committee-itis, of anything but "a revision of an existing protocol".

  53. Bob Denny says:

    This scenario has been repeated ad nauseam for as long as I have been associated with computing and communications (my whole life, 40+ years as a developer, and I am current and still do it full time as a profession). Having been through a similar evolution in the area of a protocol for distributing transient astronomical events for follow-up, I can say that the collective farming model of engineering is flourishing. Under this model, people are more concerned with form than results, and everyone has to urinate on the fire hydrant. Compare that with engineering, where elegance, results, prototyping, and incremental refinement in the real world are king. Reading your story was a trip down many memory lanes. I'm sorry for your loss.

  54. [...] the news today was that Eran has decided that OAuth2.0 is a bad specification and wants nothing to do with it.  Its kinda a bit too late to complain about OAuth2.0.  Its out [...]

  55. [...] authority, you'll get to the end of it having achieved nothing. Hammer put everything down in a blog post. It's a long one, but well worth a read. Suffice to say, stick with OAuth 1.0. Share [...]

  56. [...] OAuth 2.0 and the Road to Hell « hueniverse. [...]

  57. [...] In his blog, Hammer, one of the developers of OAuth 1.0, tears the new version of the authorization protocol apart. He initially contributed to version 2.0 but has since washed his hands of it. "The new protocol is so bad that I no longer want to have anything to do with it." [...]

  58. James Henstridge says:

    Looking through your complaints, could you elaborate why the “unbounded tokens” issue is bad? It isn’t clear what benefit there is to the client in providing its ID in every request, and the server obviously knows who it issued the token to (and could encode the client ID in the token if it really wants). And the OAuth 2 design sounds like it would let the service limit knowledge of client secrets to the code that issues tokens rather than every piece of code that checks signatures needing those secrets.

    As for token types, OAuth 1 has PLAINTEXT and HMAC-SHA1, while OAuth 2 has bearer and HMAC token types. While there are obviously places where OAuth 1 HMAC-SHA1 signatures are suitable where OAuth 2 bearer tokens aren't, that isn't really a fair comparison. Comparing PLAINTEXT signatures with bearer tokens makes more sense (sketched at the end of this comment), and I think bearer tokens come out ahead. They do away with unnecessary timestamps and nonces, which only add failure modes rather than extra security (e.g. does it really matter if a client's clock is wrong when they're using PLAINTEXT auth?). Reserving that complexity for HMAC tokens seems like a sensible improvement.

    As for token expiration, OAuth 1 clients need to deal with the possibility that their token may become invalid some time down the track (possibly because it expired). It isn’t obvious to me that formalising this case is a bad thing.

    I haven't read the grants section in detail, but I will say that I've seen people get confused about the distinction between OAuth 1 request tokens and access tokens. So using different terminology for these different phases of the authorisation process seems somewhat sane. Documenting a method to implement "desktop username/password auth" (not via web browser) also seems like a decent addition. When we wanted to do this with OAuth 1, the spec offered no guidance, but we ended up with something that is roughly equivalent.

    I haven’t used OAuth 2 in anger yet, so perhaps I am wrong about some of this. But from my use of OAuth 1, the changes don’t all obviously sound like bad ideas.
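
    To make the comparison concrete, here's a rough sketch (Python, with made-up credential values) of the two Authorization headers being compared; in both cases everything needed to replay a request travels together on the wire, which is the point.

        from urllib.parse import quote

        # Hypothetical credentials, for illustration only.
        client_key, client_secret = "dpf43f3p2l4k3l03", "kd94hf93k423kf44"
        token, token_secret = "nnch734d00sl2jdk", "pfkkdhi9sl3r4s00"

        # OAuth 1.0 PLAINTEXT: the "signature" is just the two secrets joined with '&',
        # sent alongside the client key, token, and (commonly) a timestamp and nonce.
        plaintext_sig = quote(client_secret, safe="") + "&" + quote(token_secret, safe="")
        oauth1_header = (
            'OAuth oauth_consumer_key="%s", oauth_token="%s", '
            'oauth_signature_method="PLAINTEXT", oauth_timestamp="1343059200", '
            'oauth_nonce="7d8f3e4a", oauth_signature="%s"'
            % (client_key, token, quote(plaintext_sig, safe=""))
        )

        # OAuth 2.0 bearer: one opaque value, no timestamp or nonce to get wrong.
        oauth2_header = "Bearer " + token

        print("Authorization:", oauth1_header)
        print("Authorization:", oauth2_header)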

    • Eran Hammer says:

      The changes are all valid technical choices. They are not necessarily bad on their own. What they do is make OAuth better at scale by stripping down protections under the assumption that other layers will take care of things. The problem is, that is not how modern security protocols should be designed; they should be designed with layered security. Because the token is unbounded, all you need to do is steal just one thing, and if TLS wasn't configured correctly, you're toast. Also, with bounded tokens, a client must share its own credentials to pass tokens around. This is a good protection against an approved application leaking your data.

      • James Henstridge says:

        I realise that if the transport layer security fails and a bearer token is leaked, it can be used as credentials for other requests. But the same is true for OAuth 1 PLAINTEXT signatures: while you need four values to generate the signature instead of just one, those four are always presented together so it isn’t clear they offer any more security than a single value. If anything, it looks like the bearer token would offer slightly more security since such a leak won’t expose the client ID and secret.

        Now of course OAuth 1's HMAC-SHA1 signatures offer better security in this situation. But that point seems moot given that the OAuth 2 specs document HMAC tokens. I realise that HMAC tokens only use two values to construct the signature while OAuth 1's signatures use four, but those four values in OAuth 1 signatures may as well only be two given the way they're used (consumer_key+token and consumer_secret+token_secret; see the sketch below).

        It really doesn’t look any worse than the situation of OAuth 1 documenting multiple signing algorithms.
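
        To put numbers behind the "may as well only be two" point, here is a minimal sketch (Python, hypothetical values): the two OAuth 1 identifiers travel in the clear inside the signature base string, while the two secrets collapse into a single HMAC key; the OAuth 2 MAC draft instead issues a single id/key pair with the token.

            import base64, hashlib, hmac
            from urllib.parse import quote

            # Hypothetical credentials, for illustration only.
            consumer_key, consumer_secret = "dpf43f3p2l4k3l03", "kd94hf93k423kf44"
            token, token_secret = "nnch734d00sl2jdk", "pfkkdhi9sl3r4s00"

            # OAuth 1 HMAC-SHA1: the signing key is always "consumer_secret&token_secret".
            oauth1_key = quote(consumer_secret, safe="") + "&" + quote(token_secret, safe="")
            base_string = "GET&https%3A%2F%2Fapi.example.com%2Fphotos&..."  # elided for brevity
            oauth1_sig = base64.b64encode(
                hmac.new(oauth1_key.encode(), base_string.encode(), hashlib.sha1).digest()
            ).decode()

            # OAuth 2 MAC draft: one key issued alongside one token id; no second,
            # client-held secret is mixed into the key. (Its normalized request string
            # differs in detail; the same input is reused here only to contrast the keys.)
            mac_id, mac_key = "h480djs93hd8", "489dks293j39"
            oauth2_sig = base64.b64encode(
                hmac.new(mac_key.encode(), base_string.encode(), hashlib.sha256).digest()
            ).decode()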

        • Btw, side note: reading the OAuth 2 MAC spec and OAuth 1's HMAC-SHA1 section, I noticed something.
          OAuth 1 signatures include the POST body for form-urlencoded data, while OAuth 2 MACs offer no protection to the entity-body.

          • Eran Hammer says:

            True. It ended up being an encoding/decoding nightmare. Instead, MAC offered an ext parameter for stuff like a body hash that's application specific.

            • Yeah… the way decoded query parameters were worked with and then encoded in a specific way, in a specific order, was a mess.

              I’ve been contemplating how I’d write a spec like this.
              I looked over some of the scripting languages. It looks like practically every language gives the script access to the raw POST body (with the exception of things like multipart/form-data, which you would stream and wouldn't want a signature for).

              In that vein, my thought was to have two modes for the MAC: one that excludes the body and another that includes the body in the signature (or rather includes a hash of the body in the signature, so large data can be verified after processing without requiring double storage of the original data).

              A client would decide which mode to work in depending on the data they are working with. For large content and multipart documents you'd use the mode without the body, and for form-urlencoded, JSON, XML, etc. POST APIs you would sign the body.
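
              A very rough sketch of that two-mode idea (my own illustration in Python, not any published spec): in "body" mode a hash of the raw body is folded into the string that gets MAC'd, so the payload never has to be canonicalized or stored twice.

                  import base64, hashlib, hmac

                  def signed_string(ts, nonce, method, uri, host, port, body=None):
                      # Normalized request string, loosely modeled on the MAC draft's fields.
                      parts = [ts, nonce, method, uri, host, str(port)]
                      if body is not None:  # "body" mode: sign a hash of the payload
                          parts.append(base64.b64encode(hashlib.sha256(body).digest()).decode())
                      return "\n".join(parts) + "\n"

                  def mac(key, string_to_sign):
                      return base64.b64encode(
                          hmac.new(key.encode(), string_to_sign.encode(), hashlib.sha256).digest()
                      ).decode()

                  # Example: a JSON POST signed in "body" mode with a hypothetical MAC key.
                  sts = signed_string("1343059200", "7d8f3e4a", "POST", "/photos",
                                      "api.example.com", 443, body=b'{"title":"sunset"}')
                  print(mac("489dks293j39", sts))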

        • OAuth 1 emphasized signatures while OAuth 2 seems to emphasize bearer tokens. So there is an issue with perception of the spec to consider. Everyone who used OAuth 1 used signatures, while everyone who has used OAuth 2 has only used bearer tokens and seems to consider this security degradation an advantage.

          That aside, technically, yes, we should probably compare OAuth 1 HMAC to OAuth 2 MAC tokens.

          On the topic of unbounded tokens combined with (H)MAC signatures.
          Assuming the TLS connection fails (the client developer decides to disable it, etc.), doesn't the fact that the access token is unbounded leave it open to abuse by the MITM?

          In an OAuth 1 signature the client secret and token secret are used together. While an OAuth 2 signature just uses the token secret.
          Because HMAC is used the token secret is safe in both cases for requests using the token.

          But what about the token endpoint itself? If TLS protection has failed then isn’t it likely that the MITM is capable of watching you make the grant request to the token endpoint asking for the token?
          So even though we’re using signatures doesn’t this mean that the MITM has likely gotten ahold of the MAC token and knows the secret already?

          OAuth 1 uses the client secret and the token secret, so the MITM shouldn't be able to do anything malicious, since it doesn't know the client secret. But OAuth 2 doesn't use the client credentials anywhere (until a refresh) after the MAC credentials are handed over.

          Doesn’t this mean that OAuth 1 bounded tokens protect the resource endpoint from attack while OAuth 2 unbounded tokens allow a MITM to freely abuse the access token maliciously?
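
          For what it's worth, here's how I picture that scenario (a sketch with hypothetical values; field names roughly follow each spec's token response). An eavesdropper who can read the token-endpoint exchange sees everything in these responses:

              oauth1_token_response = {      # OAuth 1.0: token credentials only
                  "oauth_token": "nnch734d00sl2jdk",
                  "oauth_token_secret": "pfkkdhi9sl3r4s00",
              }
              # The consumer secret never crosses the wire, so the eavesdropper still
              # cannot build the HMAC-SHA1 signing key "consumer_secret&token_secret".

              oauth2_mac_token_response = {  # OAuth 2.0 MAC draft: everything needed to sign
                  "access_token": "h480djs93hd8",
                  "token_type": "mac",
                  "mac_key": "489dks293j39",
                  "mac_algorithm": "hmac-sha-256",
              }
              # The whole MAC key is in this response, so the eavesdropper can mint
              # valid signatures for the resource server on its own.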

          • James Henstridge says:

            In the current draft of the OAuth 2 spec, token types are introduced in Section 7.1. It mentions bearer tokens with a link to the bearer token spec, and then mentions MAC tokens in the very next paragraph along with a link to that spec. So are services picking OAuth 2 bearer tokens that different to ones picking OAuth 1 PLAINTEXT signatures?

            My point wasn't that bearer tokens are safe against TLS failures, but rather that they are no worse than the OAuth 1 PLAINTEXT signatures. Now if there are deficiencies in OAuth 2 MAC tokens, that would seem to be a separate matter.

            It does seem that MAC token authentication doesn't protect the request body. It looks like the first draft included a body hash in the signed request string, but it was removed in the current draft. I'm not sure why that was changed, but it looks like a service could reintroduce it through the "ext" parameter if it chose to. That's not particularly great for interoperability, but it's not a fatal problem.

            The other issue is that the entire MAC key is sent over the wire when the token is issued (a problem if the TLS fails and the connection is intercepted). I agree that OAuth 1 has a leg up here, since a portion of the MAC key is never transmitted over the wire (the client secret).

            Surely it would be better to concentrate on these issues directly rather than making strawman comparisons with bearer tokens.

        • Eran Hammer says:

          First, very few providers implemented PLAINTEXT. OAuth 1.0 gave them three options: light, medium, and heavy. It is true that if you capture a plaintext request it is game over, but there are many other scenarios besides the ability to listen to the entire channel. In 1.0, you provision a token to a specific client, and then you enforce that restriction. You can't do that in 2.0 as specified. As for HMAC in 2.0, I've been trying to get it supported for three years, but at this point I cannot predict whether the working group will finish that effort. Fewer than 5 people showed interest last time I checked.
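
          A minimal sketch of what I mean by enforcing that restriction (Python, hypothetical token store and names): because every 1.0 request names both the client and the token, the resource server can check that the token being presented was issued to the client presenting it.

              # Hypothetical server-side token store.
              TOKENS = {"nnch734d00sl2jdk": {"issued_to": "dpf43f3p2l4k3l03", "scope": "photos"}}

              def check_bound_token(consumer_key, token):
                  record = TOKENS.get(token)
                  if record is None:
                      return False
                  # The binding check: a token stolen from one client is useless to another,
                  # since the thief would also need that client's secret to sign the request.
                  return record["issued_to"] == consumer_key

              # A bare 2.0 bearer request carries no client identity at all, so there is
              # nothing equivalent for the resource server to verify.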

  59. [...] fact that he would no longer be the editor of the standard in a harshly critical blog post entitled OAuth 2.0 and the Road to Hell where he made a number of key criticisms of the specification the meat of which is excerpted [...]

  60. [...] sound of a door slamming last week was Eran Hammer storming out of the OAuth standardization process, declaring once and for all that the technology was dead, and that he would [...]

  61. Rosco says:

    You could work on a new spec, call it NAuth for “New Auth – one step ahead of OAuth”! ;)

  62. [...] this post is obviously triggered by the recent damnation of OAuth 2.0 by the (former) spec editor Eran Hammer, it's not directly related to it. These are my thoughts [...]

  63. [...] of the W3C by Ian Hickson echoes the recent announcement by Eran Hammer, who is withdrawing from the list of editors of the OAuth 2.0 specification. Eran, after having been one of the main players in OAuth standardization (used [...]

  64. SV says:

    This reminds me of the XSLT/XPath 2.0 debacle at the W3C. James Clark, the editor and driving force behind XSLT/XPath 1.0, dropped out of the effort early on. My colleague at Novell and I hung in there for several years before dropping out, when it became clear that it was going to take several more years before it would be done and that the result would be several orders of magnitude more complex than 1.0, which was already very difficult for many of our constituency to understand.

  65. [...] friend sent me Eran Hammer’s post about his stepping away from the OAuth 2.0 spec. My friend seemed to think this was an indictment of OAuth, but I think it’s more an [...]

  66. [...] ways of implementing, but seems to now be standardising under the OAuth protocol, which is great. Even if version 2.0 has some issues, this kind of thing needs to be [...]

  67. [...] in any standards-making body bursts out into the open.  When Eran Hammer wrote his blog post ‘OAuth 2.0 and the Road to Hell’, he was articulating the frustration of many developers who see standards  evolve in the direction [...]

  68. [...] So, now I have to weigh in on the OAuth story too! Anyone who doesn't know what I'm alluding to should first read Eran Hammer's blog post: OAuth 2.0 and the Road to Hell. [...]

  69. Jeff Brandt says:

    Great article, but I hate to hear the news. I attended a healthcare meeting on a proposal to use OAuth2 and it sounded like overkill.

    It reminds me of all my work in the 90s on SET (Secure Electronic Transactions) with IBM, a very heavy secure protocol. All of a sudden SSL came on the table and SET died. SSL was "good enough".

    Best of luck,
    Jeff

    • KatieP says:

      Yes, that seems to be the case. OAuth is designed to solve a simple problem in a complicated universe. And there were many complicated tries at this: WS-*, SAML (Federation)… I put this down as the growing pains of this simple protocol.

  70. Again, the overall tone of the comments is correct. The criticism is IDENTICAL to the criticism levied at X.509. X.509 in its v3 incarnation was a framework of a thousand incompatibilities, and a notation for easily manufacturing more.

    Of course, Eran also notes the solution to a framework-notational spec (which occurred to X.509, another infamous framework-notational spec): someone comes along and makes a super-dominant profile, probably webby, that is so compelling that everyone else needs to connect. For X.509, this was of course the Netscape SSL server certificate. Only in Windows do you see, enterprise-like, the 24 other profiles that hardly anyone ever sees (and actually rarely bitches about).

    Now the number of folks who bitch about X.509 is larger by two orders of magnitude than those who bitch at OAuth v2, mostly because it's just been around for 20 years longer, and the bitch-fests just get more political, more religious, and more hatred-filled each year, as a function of _reach_ (didn't you know X.509 today is still an ISO/Soviet/Al-Qaeda plot to take over the US and impose 8-bit micros on the world, a rather popular view in 1986 when X.509 first came about?). But also note: WHEN THE FRAMEWORK IS RIGHT, you are still talking about a silly bit format 20 (yes, 20) years later.

    That's what standards are for. They are there to still be around 20 (and for X.509, 30) years later, having evolved piecemeal for the last 15 of them using the notational framework, evolving subtly and dinosaur-like as more and more economic dependence is placed on their shoulders.

  71. Raymond Forbes says:

    Yes, yes, and more yes! Thanks so much for posting this. I watched the OAuth 2.0 spec being developed after learning OAuth 1.0, and I was vastly disappointed in the decisions that were being made. It really felt like they were sacrificing security for ease of use, which is never a good thing for a security spec.

    -r

  72. [...] July 27, 2012 by ekivemark I just read a CNet article about Eran Hammer-Lahav leaving his role as Lead of the OAuth2.0 specification. Eran had put 5 years of effort in to developing OAuth 2.0. It seems to be another story of [...]

  73. Attaullah Baig says:

    There was no such thing as “simplicity”

  74. Not much to add to all of this discussion.

    However I want to thank you for your time dedicated throughout these years. Your geek efforts have made a huge difference.

    The guide was also awesome and explained very well how OAuth 1.0 works.

    Now…as Mr. Wayne would say…”Eran Hammer, why do we fall sir? So we might learn to pick ourselves up”

    I hope you keep producing valuable things for the community.

    Best,
    Andrés

  75. [...] recent sound of a door slamming was Eran Hammer storming out of the OAuth standardization process, declaring once and for all that the technology to which he gave so much of [...]

  76. [...] controversy (for example, check out what OAuth lead author Eran Hammer has to say about it here: http://hueniverse.com/2012/07/oauth-2-0-and-the-road-to-hell/ .  In fact, some SharePoint experts have gone on the record stating that security for Apps [...]

  77. [...] the details of how something is built that makes you realize, not everything is what it seems to be: http://hueniverse.com/2012/07/oauth-2-0-and-the-road-to-hell/ After reading the article…go to the last part…I tweeted that all not too long ago. I did [...]

  78. [...] think these are both useful steps in a good direction, but neither solves the problem: open protocols aren’t immune from death-by-committee, basically all OStatus users rely on a free but commercial server such as identi.ca to host their [...]

    • Nils says:

      I've been a consultant for the best part of my life. I've been implementing access management solutions, SSO solutions, federated SSO, and now I'm at the dawn of setting up a federated authorisation architecture for a big corporate organisation. I have not read all of these posts, but can somebody tell me what to do now? I need to provide the answer to the customer tomorrow… sic