Angular momentum

I was chatting with some people recently about “enterprise software”, trying to figure out exactly what that phrase means (assuming it isn’t referring to the LCARS operating system favoured by the United Federation of Planets). I always thought of enterprise software as “big, bloated and buggy,” but those are properties of the software rather than a definition.

The more we discussed it, the clearer it became that the defining attribute of enterprise software is that it’s software you never chose to use: someone else in your organisation chose it for you. So the people choosing the software and the people using the software could be entirely different groups.

That old adage “No one ever got fired for buying IBM” is the epitome of the world of enterprise software: it’s about risk-aversion, and it doesn’t necessarily prioritise the interests of the end user (although it doesn’t have to be that way).

In his critique of AngularJS, PPK points to an article discussing the framework’s suitability for enterprise software and says:

Angular is aimed at large enterprise IT back-enders and managers who are confused by JavaScript’s insane proliferation of tools.

My own anecdotal experience suggests that Angular is not only suitable for enterprise software, but—assuming the definition provided above—Angular is enterprise software. In other words, the people deciding that something should be built in Angular are not necessarily the same people who will be doing the actual building.

Like I said, this is just anecdotal, but it’s happened more than once that a potential client has approached Clearleft about a project, and made it clear that they’re going to be building it in Angular. Now, to me, that seems weird: making a technical decision about what front-end technologies you’ll be using before even figuring out what your website needs to do.

Ah, but there’s the rub! It’s only weird if you think of Angular as a front-end technology. The idea of choosing a back-end technology (PHP, Ruby, Python, whatever) before knowing what your website needs to do doesn’t seem nearly as weird to me—it shouldn’t matter in the least what programming language is running on the server. But Angular is a front-end technology, right? I mean, it’s written in JavaScript and it’s executed inside web browsers. (By the way, when I say “Angular”, I’m using it as shorthand for “Angular and its ilk”—this applies to pretty much all the monolithic JavaScript MVC frameworks out there.)

Well, yes, technically Angular is a front-end framework, but conceptually and philosophically it’s much more like a back-end framework (actually, I think it’s conceptually closest to a native SDK; something more akin to writing iOS or Android apps, while others compare it to ASP.NET). That’s what PPK is getting at in his follow-up post, Front end and back end. In fact, one of the rebuttals to PPK’s original post basically makes exactly the same point as PPK was making: Angular is for making (possibly enterprise) applications that happen to be on the web, but are not of the web.

On the web, but not of the web. I’m well aware of how vague and hand-wavey that sounds, so I’d better explain what I mean by that.

The way I see it, the web is more than just a set of protocols and agreements—HTTP, URLs, HTML. It’s also built with a set of principles that—much like the principles underlying the internet itself—are founded on ideas of universality and accessibility. “Universal access” is a pretty good rallying cry for the web. Now, the great thing about the technologies we use to build websites—HTML, CSS, and JavaScript—is that universal access doesn’t have to mean that everyone gets the same experience.

Yes, like a broken record, I am once again talking about progressive enhancement. But honestly, that’s because it maps so closely to the strengths of the web: you start off by providing a service, using the simplest of technologies, that’s available to anyone capable of accessing the internet. Then you layer on all the latest and greatest browser technologies to make the best possible experience for the greatest number of people. But crucially, if any of those enhancements aren’t available to someone, that’s okay; they can still accomplish the core tasks.
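
To make that layering concrete, here’s a minimal sketch in plain JavaScript. It assumes a hypothetical search form that already works as a normal GET request to /search; the script is purely an enhancement, so if it never runs, the form still does its job:

```javascript
// Progressive enhancement sketch: the form works without this script.
// Hypothetical markup: <form action="/search"> inside a page with <main>.
var form = document.querySelector('form[action="/search"]');

// Only enhance when the browser supports everything we need.
if (form && 'fetch' in window && 'FormData' in window && 'URLSearchParams' in window) {
  form.addEventListener('submit', function (event) {
    event.preventDefault();
    var params = new URLSearchParams(new FormData(form));
    fetch(form.action + '?' + params.toString())
      .then(function (response) { return response.text(); })
      .then(function (html) {
        // Enhanced path: swap the results in without a full page refresh.
        document.querySelector('main').innerHTML = html;
      })
      .catch(function () {
        // If the enhancement fails mid-flight, fall back to the basics.
        form.submit();
      });
  });
}
```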

So that’s one view of the web. It’s a view of the web that I share with other front-end developers with a background in web standards.

There’s another way of viewing the web. You can treat the web as a delivery mechanism. It is a very, very powerful delivery mechanism, especially if you compare it to alternatives like CD-ROMs, USB sticks, and app stores. As long as someone has the URL of your product, and they have a browser that matches the minimum requirements, they can have instant access to the latest version of your software.

That’s pretty amazing, but the snag for me is that bit about having a browser that matches the minimum requirements. For me, that clashes with the universality that lies at the heart of the World Wide Web. Sites built in this way are on the web, but are not of the web.

This isn’t anything new. If you think about it, sites that used the Flash plug-in to deliver their experience were on the web, but not of the web. They were using the web as a delivery mechanism, but they weren’t making use of the capabilities of the web for universal access. As long as you have the Flash plug-in, you get 100% of the intended experience. If you don’t have the plug-in, you get 0% of the intended experience. The modern equivalent is using a monolithic JavaScript library like Angular. As long as your browser (and network) fulfils the minimum requirements, you should get 100% of the experience. But if your browser falls short, you get nothing. In other words, Angular and its ilk treat the web as a platform, not a continuum.

If you’re coming from a programming environment where you have a very good idea of what the runtime environment will be (e.g. a native app, a server-side script) then this idea of having minimum requirements for the runtime environment makes total sense. But, for me, it doesn’t match up well with the web, because the web is accessed by web browsers. Plural.

It’s telling that we’ve fallen into the trap of talking about what “the browser” is capable of, as though it were indeed a single runtime environment. There is no single “browser”, there are multiple, varied, hostile browsers, with differing degrees of support for front-end technologies …and that’s okay. The web was ever thus, and despite the wishes of some people that we only code for a single rendering engine, the web will—I hope—always have this level of diversity and competition when it comes to web browsers (call it fragmentation if you like). I not only accept that the web is this messy, chaotic place that will be accessed by a multitude of devices, I positively welcome it!

The alternative is to play a game of “let’s pretend”: Let’s pretend that web browsers can be treated like a single runtime environment; Let’s pretend that everyone is using a capable browser on a powerful device.

The problem with playing this game of “let’s pretend” is that we’ve played it before and it never works out well: Let’s pretend that everyone has a broadband connection; Let’s pretend that everyone has a screen that’s at least 960 pixels wide.

I refused to play that game in the past and I still refuse to play it today. I’d much rather live with the uncomfortable truth of a fragmented, diverse landscape of web browsers than live with a comfortable delusion.

The alternative—to treat “the browser” as though it were a known quantity—reminds me of the punchline to all those physics jokes that go “Assume a perfectly spherical cow…”

Monolithic JavaScript frameworks like Angular assume a perfectly spherical browser.

If you’re willing to accept that assumption—and say to hell with the 250,000,000 people using Opera Mini (to pick just one example)—then Angular is a very powerful tool for helping you build something that is on the web, but not of the web.

Now I’m not saying that this way of building is wrong, just that it is at odds with my own principles. That’s why Angular isn’t necessarily a bad tool, but it’s a bad tool for me.

We often talk about opinionated software, but the truth is that all software is opinionated, because all software is built by humans, and humans can’t help but imbue their beliefs and biases into what they build (Tim Berners-Lee’s World Wide Web being a good example of that).

Software, like all technologies, is inherently political. … Code inevitably reflects the choices, biases and desires of its creators.

—Jamais Cascio

When it comes to choosing software that’s supposed to help you work faster—a JavaScript framework, for example—there are many questions you can ask: Is the code well-written? How big is the file size? What’s the browser support? Is there an active community maintaining it? But all of those questions are secondary to the most important question of all, which is “Do the beliefs and assumptions of this software match my own beliefs and assumptions?”

If the answer to that question is “yes”, then the software will help you. But if the answer is “no”, then you will be constantly butting heads with the software. At that point it’s no longer a useful tool for you. That doesn’t mean it’s a bad tool, just that it’s not a good fit for your needs.

That’s the reason why you can have one group of developers loudly proclaiming that a particular framework “rocks!” and another group proclaiming equally loudly that it “sucks!”. Neither group is right …and neither group is wrong. It comes down to how well the assumptions of that framework match your own worldview.

Now when it comes to a big MVC JavaScript framework like Angular, this issue is hugely magnified because the software is based on such a huge assumption: a perfectly spherical browser. This is exemplified by the architectural decision to do client-side rendering with client-side templates (as opposed to doing server-side rendering with server-side templates, also known as serving websites). You could try to debate the finer points of which is faster or more efficient, but it’s kind of like trying to have a debate between an atheist and a creationist about the finer points of biology—the fundamental assumptions of both parties are so far apart that it makes a rational discussion nigh-on impossible.
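
To make the contrast concrete, here’s a minimal sketch, using Node with Express purely as an illustrative stack and a stubbed loadArticle standing in for real data access; neither route is any particular framework’s actual code:

```javascript
// Contrast sketch: server-side rendering vs. client-side rendering.
const express = require('express'); // assumes Express is installed
const app = express();

// Stand-in for real data access.
const loadArticle = (id) => ({ title: 'Article ' + id, body: 'Hello, web.' });

// Server-side rendering: the response is complete HTML. Any client that
// can make an HTTP request gets the content.
app.get('/articles/:id', (req, res) => {
  const article = loadArticle(req.params.id);
  res.send('<h1>' + article.title + '</h1><p>' + article.body + '</p>');
});

// Client-side rendering: the server sends an empty shell plus a script;
// the content only appears if that JavaScript downloads, parses, and runs.
app.get('/app/articles/:id', (req, res) => {
  res.send(`
    <main id="app"></main>
    <script>
      fetch('/api/articles/${req.params.id}')
        .then(function (r) { return r.json(); })
        .then(function (a) {
          document.getElementById('app').innerHTML =
            '<h1>' + a.title + '</h1><p>' + a.body + '</p>';
        });
    </script>`);
});

// The JSON endpoint the client-side version depends on.
app.get('/api/articles/:id', (req, res) => {
  res.json(loadArticle(req.params.id));
});

app.listen(3000);
```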

(Incidentally, Brett Slatkin ran the numbers to compare the speed of client-side vs. server-side rendering. His methodology is very telling: he tested in Chrome and …another Chrome. “The browser” indeed.)

So …depending on the way you view the web—“universal access” or “delivery mechanism”—Angular is either of no use to you, or is an immensely powerful tool. It’s entirely subjective.

But the problem is that if Angular is indeed enterprise software—i.e. somebody else is making the decision about whether or not you will be using it—then you could end up in a situation where you are forced to use a tool that not only doesn’t align with your principles, but is completely opposed to them. That’s a nightmare scenario.

Responses

Chris Coyier

Jeremy Keith attempts to make this distinction, using Angular and the concept of “enterprise” software as the catalyst. “Of the web”: Built of fundamental principles of the web. Universal access. “On the web”: The web as a delivery mechanism. The owners dictate use. Jeremy, who has been banging the progressive enhancement drum since forever, is predictably an “of the web” kinda guy. He only takes issue with the fact that other folks might be forced into working against their principles because of an above-their-head software choice. I’m slightly less concerned. It actually makes me feel better thinking of things in those terms. While I feel more aligned with the fundamental-principles thinking, I’ve never held it against any website for dictating how it can be used. There is also gray area here. Every website I’ve ever worked on had to make choices about what it can feasibly support, because business.

Aaron Gustafson

Last week Peter-Paul Koch (PPK) posted a lengthy treatise on why browsers should stop “pushing the web forward”. I thoroughly enjoyed the read and agree with him on a number of points. I also agreed with the well-articulated responses from Jake Archibald (of Google) and Bruce Lawson (of Opera). I guess I’m saying I see both sides. Like Chris Coyier, I live in a world filled with varying shades of grey rather than stark black & white.

New Features vs. Interoperability

One of the arguments PPK makes is against browsers competing on features. It really rang true to me:

I call for a moratorium on new browser features of about a year. Let’s postpone all completely new features that as of right now don’t yet work in any browser.

Browsers are encouraged to add features that are already supported by other browsers, and to write bug fixes. In fact, finding time for these unglorious but necessary jobs would be an important advantage of the moratorium. As an added bonus it would decrease the amount of tools web developers need.

Back in January, I wrote about how I was excited by Microsoft’s announcement of “Project Spartan” (now “Microsoft Edge”) and its focus on interoperability. Interoperability’s a long word, so I’m gonna go with “interop” from here on out.

I was not on the Microsoft payroll at the time, but I was still stoked to see their focus on interop in the new rendering engine. They’d even gone, in my humble opinion, above and beyond in this regard—aliasing Webkit’s experimental, legacy CSS syntaxes to their modern, standards-based implementations. This ensured poorly coded sites worked well in their browser and didn’t penalize users for a designer’s mistake. Talk about being a good web citizen!

Of course, Microsoft Edge wasn’t the first browser to do this. IE 7 Mobile implemented -webkit-text-size-adjust back in 2010. Opera and Mozilla also felt the pressure and eventually implemented -webkit- vendor prefixes in versions of their browsers. It’s a weird world when one browser vendor is forced to implement another’s proprietary syntax just to make the web work, but it’s the sad state of things in our StackOverflow-driven development world.

With the move away from vendor prefixes in CSS to “feature flags”, you’d think this sort of thing would be behind us, but it’s not. Karl Dubost, of Mozilla, recently bemoaned the implications of Apple’s latest vendor prefix silliness on his blog. In that post, he made a keen observation:

We have reached the point where browser vendors have to start implementing or aliasing these WebKit prefixes just to allow their users to browse the Web, see Mozilla in Gecko and Microsoft in Edge. The same thing is happening over again. In the past, browser vendors had to implement the quirks of IE to be compatible with the Web. As much as I hate it, we will have to specify the current -webkit- prefixes to implement them uniformly.

I completely understand PPK’s desire for browsers to apply the brakes a bit and focus on interop. With new features being added to “the web”—but in reality only browser X, Y, or Z—on the regular, without guaranteed interop, it feels like we’re stirring up the browser wars again. All the new shiny is exciting, but I lived through the browser wars the first time and they sucked for everyone involved. Web standards helped us get everyone on the same page and brokered what we’d hoped was going to be a lasting peace.

Now I’m not sure I agree with applying the brakes for a specific amount of time, but I do see great value in prioritizing interop over new features. And when browsers do implement new features, they should definitely put them behind feature flags (or some similar opt-in) to ensure we—the web development community—don’t start relying on some fancy new feature before it’s been vetted. Feature flags are awesome because they allow me, a designer, to experiment with a new technology in my own browser without affecting things for everyone else on the open web.

We used to think vendor prefixes were enough of a deterrent to using a particular experimental CSS property or JavaScript method. Sadly that’s turned out to not be the case. I would bet good money on the sad reality that 80% of the working web designers out there don’t understand that -*- means “experimental” or even “proprietary”. We—the web design authors, speakers, educators, and other influencers—did a shitty job landing that message with the industry as a whole. But even if we’d hounded people about it, it probably wouldn’t have mattered: Vendor-prefixed properties work. And now they work even in browsers they were never meant to.

So, here’s what I’d love to see browser vendors do:

  1. Prioritize interop over new features. Don’t halt development on new features, just put them on the back burner so the rising tide can, as they say, lift all the ships. Web developers and end users all benefit when there’s feature parity and stability among browsers.
  2. Put a moratorium on vendor-prefixes. They are not generally understood to be experimental. If you feel you must use a vendor prefix, ensure it’s only enabled by a corresponding feature flag.
  3. Use feature flags (or some similar opt-in) to enable developers to test experimental features in their own browsers, but also to ensure they aren’t available on the “open web” before they’re ready.

The Web vs. Native

PPK has harped on this a few times. There is currently a palpable tension between “native” and “the web”. It’s driving most of the new features in the web “platform”1 and it’s giving many of us old-timers a touch of angina.

The reason is simple: The web was created as a massively interconnected document repository. A wealth of knowledge dependent on the hyperlink and the URL. The web was (and indeed still is) stateless by default, meaning it has no idea who you are or what you’ve done from request to request. This is very egalitarian: everyone has access and anyone can contribute.

As more businesses moved online, the web became necessarily transactional. Suddenly websites needed to know information about your “state” so they could sell you things and track your movements around their site and the rest of the web. With the advent of cookies and the Common Gateway Interface (CGI), a web server could adjust the content it sent in response to a request, based on what it knew about you and what you were doing.

Taking this simple capacity a step further, it became possible to write actual software on the web. Content management systems were probably the first big chunk of software to move online, but more soon followed. JavaScript came along and allowed us to add a bit of logic to the client side, reducing our reliance on round-trips to the server. Then we got Ajax and the whole JavaScript world exploded. We now have web-based photo editors, integrated development environments (IDEs), games, and more, all reliant on JavaScript’s ability to interact with the browser and manipulate what the user sees in real-time.

There were earlier machinations certainly, but the last ten years have seen the biggest push to bring more traditional software-like interactions to the web. Dozens of organizations, big and small, are trying to make their mark creating the framework for building these “next-generation” web-based app experiences. Honestly, I don’t have a problem with that. I don’t really have an interest in client-side frameworks, but I don’t have a problem with them either… provided developers who wish to bring their programming talents to the web take a little time to learn about the medium.

If you don’t take the time to understand how the web works, you’ll spend half your time cursing it and the other half trying to work around the things that frustrate you (which you will probably write off as “poorly designed” or “ill-conceived”). If you don’t understand how the web works, you’ll build fragile experiences that collapse like a house of cards when any one of your many dependencies—the network, JavaScript, some particular element or browser API—isn’t available. If you don’t understand how the web works, what you build will simply be on the web, not of it.

I don’t particularly care much about bringing “native like” “60fps” experiences to the web. It’s not that I don’t write software (I do), I just don’t really care if something I make for the web feels like a piece of installed software. I’ll do everything in my power to ensure my users have a great experience, but I know that each person’s experience will be a little bit different and I no longer feel the need to enforce my will on their experience. I’d rather create many ways for someone to interact with the things I build and hope one or more of those work well for whoever happens by and whatever device they happen to be using.

Native software and the web have always co-existed. We had installed software on computers long before the web even existed and we will continue to have installed software for as long as there are computers. Some software will move to the web if it makes sense for it to do so. Other software will remain native. Either option could be right or wrong depending on what you are trying to do. For instance, I would never personally write a photo editor in the browser because image processing requires a lot of memory and CPU cycles. Putting it in a browser moves it one more level away from the hardware. Abstraction eases development, but it invariably increases overhead and reduces performance.

Traditional software and the web can and should co-exist. They also can and should continue to inform one another. Ultimately, that will help us better serve the needs of our users, however they use our creations.

Change vs. Stagnation

Underpinning this whole “native vs. web” thing is, I think, a feeling many of us old-timers have that our web—the web we grew up building—is slipping away from us. We cling to the idea of the web as an open platform2 for people to share their thoughts, passions, and cat photos. We like the web as it was originally. We like the web as we made it.

The web is changing. In some ways it’s changing for the better, in some ways for the worse. It’s a far different beast today than when Tim Berners-Lee typed that first <HEADER> and you can certainly do a lot more in the browser now than you could when I first picked up HTML. But I don’t think halting progress on the web is desirable.

As Jake points out in his response, stagnation is not a good policy. Stagnation pretty much killed BlackBerry. It also led to a lot of developer frustration in the guise of IE 6.

Change is not inherently bad. Its pace can be quite frustrating at times, though. PPK certainly seems to be feeling that way about its speed now just as Alex Russell lamented its plodding progress back in 2007. But when you take a step back, especially with a historical perspective, you see the changes are cyclical in many ways. The bandwidth issues we dealt with during the dial-up era are with us again in the form of mobile networks. The lessons we learned building a web for 640x480 screens are equally applicable in a world of wearables. And the text-based interactions we created in the very early days will serve as a template as we move boldly forward into the realm of voice-driven user experiences.

Cutting Edge vs. Craft

In his post, PPK also complained that we’re simply getting too many new features on the web, which makes it hard to keep up. More than that, however, it makes it hard to truly come to a deeper understanding of how these different pieces work. To really hone our craft. In other words, it’s becoming harder to be an expert generalist.

Jake and Bruce completely get this, as do I. Lyza Danger Gardner has even given an amazing talk on the topic. The sheer volume of new drafts, specs, and concepts (not to mention tooling options) is overwhelming. I’m sure I don’t know half of the features that are in the HTML5 spec, let alone the umpteen CSS3 modules. I probably never will. And I’m ok with that. I’ll pick and choose the bits I’m interested in playing around with and find ways to integrate them into my practice a little at a time. That’s how we learn. That’s how we’ve always learned.

To assuage PPK’s fears, however, I would argue that there are a lot more of us working on the web now than there were in the more leisurely paced days he remembers so fondly. And we’re sharing what we learn. Whether driven by altruistic desire to spread knowledge or an interest in rockstar-like fame in the industry (or even a smidge of both), it doesn’t really matter how it happens—the fact is that it does. We learn and we share. And the tools we have today make it even easier to do so. Not only do we have the usual magazine and blogging outlets, but we also have CodePen and JS Bin and GitHub and more.

We each have the capacity to research the hell out of one specific area of web design and be the conduit for that knowledge into the web design hive mind. Look at Sara Soueidan with SVG or Rachel Nabors with CSS Animation or Zoe Mickley Gillenwater with flexbox. Individually, we will never be able to learn it all, but collectively we can. Together, we can tackle any problem by accessing what we need to know when we need to know it.

Developer Convenience vs. User Needs

Another angle in this very dense piece from PPK was around tooling and polyfills:

We get ever more features that become ever more complex and need ever more polyfills and other tools to function—tools that are part of the problem, and not of the solution.

The whole “polyfill it and move on” movement has him a little annoyed. I share his sentiment. I don’t think a JavaScript-based solution should be considered “good enough” for interop. JavaScript is not guaranteed. Moreover, JavaScript implementations are also never going to be as fast as a browser implementation. If browsers want to pick up a polyfill and implement it behind the scenes, that’s fine because it will run faster, but loading up our websites with potentially megabytes worth of polyfills in order to use new “standards” seems ludicrous.
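
There is at least a middle ground. As a rough sketch (the polyfill path here is hypothetical), you can feature-detect first and only ship the extra code to browsers that actually lack the feature, instead of bundling every polyfill for everyone:

```javascript
// Feature-detect, then load a polyfill only for browsers that need it.
// The script path is hypothetical; the point is the conditional loading.
if (!('IntersectionObserver' in window)) {
  var script = document.createElement('script');
  script.src = '/polyfills/intersection-observer.js';
  script.async = true;
  document.head.appendChild(script);
}
```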

As an industry, we are doing an awful lot of navel gazing. We are spending more time solving our own development problems (legitimate in some cases, fabricated in others) by throwing more and more code at the problem. As a consequence, our users are paying the price in slower sites, heavier web pages, poor performance, and bad experiences (or no experience). And, on top of that, we’re solving our problems not their problems.

All is not lost

We are designers. Design is about solving problems for our users, not creating new ones for them. Whether we are writing code, sketching interfaces, authoring copy, curating content, or building servers, we should make each and every decision based on what will benefit our users. If it means we can’t use some shiny new technology, so be it. We can still play with the new stuff in our own browser, on our personal sites, and on CodePen. We can learn about them in our own experimentation and share that knowledge with the rest of our industry. We can improve our craft. The web can get better.

  1. I think I just threw up in my mouth a little. I hate using that word when speaking about the web, but there it is.

  2. In the other sense of the word.

Alex Jegtnes

@jukesie Quite probably—it was the first time I heard the phrase but I’m sure it was inspired by something previous!

# Posted by Alex Jegtnes on Wednesday, September 30th, 2015 at 3:13pm

Ron Waldon

“The more we discussed it, the clearer it became that the defining attribute of enterprise software is that it’s software you never chose to use: someone else in your organisation chose it for you. So the people choosing the software and the people using the software could be entirely different groups.” I really like this definition: enterprise stuff is stuff you didn’t choose yourself. https://adactio.com/journal/8245

# Posted by Ron Waldon on Friday, September 30th, 2016 at 10:38pm

Aaron Gustafson

Late last week, Josh Korr, a project manager at Viget, posted at length about what he sees as a fundamental flaw with the argument for progressive enhancement. In reading the post, it became clear to me that Josh really doesn’t have a good grasp on progressive enhancement or the reasons its proponents think it’s a good philosophy to follow. Despite claiming to be “an expert at spotting fuzzy rhetoric and teasing out what’s really being said”, Josh makes a lot of false assumptions and inferences. My response would not have fit in a comment, so here it is…

Before I dive in, it’s worth noting that Josh admits that he is not a developer. As such, he can’t really speak to the bits where the rubber really meets the road with respect to progressive enhancement. Instead, he focuses on the argument for it, which he sees as a purely moral one… and a flimsy one at that.

I’m also unsure as to how Josh would characterize me. I don’t think I fit his mold of PE “hard-liners”, but since I’ve written two books and countless articles on the subject and he quotes me in the piece, I’ll go out on a limb and say he probably thinks I am.

Ok, enough with the preliminaries, let’s jump over to his piece…

Right out of the gate, Josh demonstrates a fundamental misread of progressive enhancement. If I had to guess, it probably stems from his source material, but he sees progressive enhancement as a moral argument:

It’s a moral imperative that everything on the web should be available to everyone everywhere all the time. Failing to achieve — or at least strive for — that goal is inhumane.

Now he’s quick to admit that no one has ever explicitly said this, but this is his takeaway from the articles and posts he’s read. It’s a pretty harsh, black & white, you’re either with us or against us sort of statement that has so many people picking sides and lobbing rocks and other heavy objects at anyone who disagrees with them. And everyone he quotes in the piece as examples of why he thinks this is progressive enhancement’s central conceit is much more of an “it depends” sort of person.

To clarify, progressive enhancement is neither moral nor amoral. It’s a philosophy that recognizes the nature of the Web as a medium and asks us to think about how to build products that are robust and capable of reaching as many potential customers as possible. It isn’t concerned with any particular technology; it simply asks that we look at each tool we use with a critical eye and consider both its benefits and drawbacks. And it’s certainly not anti-JavaScript.

I could go on, but let’s circle back to Josh’s piece. Off the bat he makes some pretty bold claims about what he intends to prove in this piece:

  1. Progressive enhancement is a philosophical, moral argument disguised as a practical approach to web development.
  2. This makes it impossible to engage with at a practical level.
  3. When exposed to scrutiny, that moral argument falls apart.
  4. Therefore, if PEers can’t find a different argument, it’s ok for everyone else to get on with their lives.

For the record, I plan to address his arguments quite practically. As I mentioned, progressive enhancement is not solely founded on morality, though that can certainly be viewed as a facet. The reality is that progressive enhancement is quite pragmatic, addressing the Web as it exists not as we might hope that it exists or how we experience it.

Over the course of a few sections—which I wish I could link to directly, but alas, the headings don’t have unique ids—he examines a handful of quotes and attempts to tease out their hidden meaning by following the LSAT’s Logical Reasoning framework. We’ll start with the first one.

Working without JavaScript Statement

  • “When we write JavaScript, it’s critical that we recognize that we can’t be guaranteed it will run.” — Aaron Gustafson
  • “If you make your core tasks dependent on JavaScript, some of your potential users will inevitably be left out in the cold.” — Jeremy Keith

Unstated assumptions:

  • Because there is some chance JavaScript won’t run, we must always account for that chance.
  • Core tasks can always be achieved without JavaScript.
  • It is always bad to ignore some potential users for any reason.

His first attempt at teasing out the meaning of these statements comes close, but ignores some critical word choices. First off, neither Jeremy nor I speak in absolutes. As I mentioned before, we (and the other folks he quotes) all believe that the right technical choices for a project depend specifically on the purpose and goals of that project. In other words, it depends. We intentionally avoid absolutist words like “always” (which, incidentally, Josh has no problem throwing around, on his own or on our behalf).

For the development of most websites, the benefits of following a progressive enhancement philosophy far outweigh the cost of doing so. I’m hoping Josh will take a few minutes to read my post on the true cost of progressive enhancement in relation to actual client projects. As a project manager, I hope he’d find it enlightening and useful.

It’s also worth noting that he’s not considering the reason we make statements like this: Many sites rely 100% on JavaScript without needing to. The reasons why sites (like news sites, for instance) are built to be completely reliant on a fragile technology is somewhat irrelevant. But what isn’t irrelevant is that it happens. Often. That’s why I said “it’s critical that we recognize that we can’t be guaranteed it will run” (emphasis mine). A lack of acknowledgement of JavaScript’s fragility is one of the main problems I see with web development today. I suspect Jeremy and everyone else quoted in the post feels exactly the same. To be successful in a medium, you need to understand the medium. And the (sad, troubling, interesting) reality of the Web is that we don’t control a whole lot. We certainly control a whole lot less than we often believe we do.

As I mentioned, I disagree with his characterization of the argument for progressive enhancement being a moral one. Morality can certainly be one argument for progressive enhancement, and as a proponent of egalitarianism I certainly see that. But it’s not the only one. If you’re in business, there are a few really good business-y reasons to embrace progressive enhancement:

  • Legal: Progressive enhancement and accessibility are very closely tied. Whether brought by legitimate groups or opportunists, lawsuits over the accessibility of your web presence can happen; following progressive enhancement may help you avoid them.
  • Development Costs: As I mentioned earlier, progressive enhancement is a more cost-effective approach, especially for long-lived projects. Here’s that link again: The True Cost of Progressive Enhancement.
  • Reach: The more means by which you enable users to access your products, information, etc., the more opportunities you create to earn their business. Consider that no one thought folks would buy big-ticket items on mobile just a few short years ago. Boy, were they wrong. Folks buy cars, planes, and more from their tablets and smartphones on the regular these days.
  • Reliability: When your site is down, not only do you lose potential customers, you run the risk of losing existing ones too. There have been numerous incidents where big sites got hosed due to JavaScript dependencies and they didn’t have a fallback. Progressive enhancement ensures users can always do what they came to your site to do, even if it’s not the ideal experience.

Hmm, no moral arguments for progressive enhancement there… but let’s continue.

Some experience vs. no experience Statement

  • “[With a PE approach,] Older browsers get a clunky experience with full page refreshes, but that’s still much, much better than giving them nothing at all.” — Jeremy Keith
  • “If for some reason JavaScript breaks, the site should still work and look good. If the CSS doesn’t load correctly, the HTML content should still be there with meaningful hyperlinks.” — Nick Pettit

Unstated assumptions:

  • A clunky experience is always better than no experience.
  • HTML content — i.e. text, images, unstyled forms — is the most important part of most websites.

You may be surprised to hear that I have no issue with Josh’s distillation here. Clunky is a bit of a loaded word, but I agree that an experience is better than no experience, especially for critical tasks like checking your bank account, registering to vote, making a purchase from an online shop. In my book, I talk a little bit about a strange thing we experienced when A List Apart stopped delivering CSS to Netscape Navigator 4 way back in 2001:

We assume that those who choose to keep using 4.0 browsers have reasons for doing so; we also assume that most of those folks don’t really care about “design issues.” They just want information, and with this approach they can still get the information they seek. In fact, since we began hiding the design from non–compliant browsers in February 2001, ALA’s Netscape 4 readership has increased, from about 6% to about 11%.

Folks come to our web offerings for a reason. Sometimes it’s to gather information, sometimes it’s to be entertained, sometimes it’s to make a purchase. It’s in our best interest to remove every potential obstacle that can preclude them from doing that. That’s good customer service.

Project priorities Statement

  • “Question any approach to the web where fancy features for a few are prioritized & basic access is something you’ll ‘get to’ eventually.” — Tim Kadlec

Unstated assumptions:

  • Everything beyond HTML content is superfluous fanciness.
  • It’s morally problematic if some users cannot access features built with JavaScript.

Not to put words in Tim’s mouth (like Josh is here), but what Tim’s quote is discussing is hype-driven (as opposed to user-centered) design. We (as developers) often prioritize our own convenience/excitement/interest over our users’ actual needs. It doesn’t happen all the time (note I said often), but it happens frequently enough to require us to call it out now and again (as Tim did here).

As for the “unstated assumptions”, I know for a fact that Tim would never call “everything beyond HTML” superfluous. What he is saying is that we should question—as in weigh the pros and cons of—each and every design pattern and development practice we consider. It’s important to do this because there are always tradeoffs. Some considerations that should be on your list include:

  • Download speed;
  • Time to interactivity;
  • Interaction performance;
  • Perceived performance;
  • Input methods;
  • User experience;
  • Screen size & orientation;
  • Visual hierarchy;
  • Aesthetic design;
  • Contrast;
  • Readability;
  • Text equivalents of rich interfaces for visually impaired users and headless UIs;
  • Fallbacks; and
  • Copywriting.

This list is by no means exhaustive nor is it in any particular order; it’s what came immediately to mind for me. Some interfaces may have fewer or more considerations as each is different. And some of these considerations might be in opposition to others depending on the interface. It’s critical that we consider the implications of our design decisions by weighing them against one another before we make any sort of decision about how to progress. Otherwise we open ourselves up to potential problems and the cost of changing things goes up the further into a project we are:

The cost of changing your mind goes up the further into any project you are. Just ask any contractor you hire to work on your house.

As a project manager, I’m sure Josh understands this reality.

As to the “morally problematic” bit, I’ll refer back to my earlier discussion of business considerations. Sure, morality can certainly be part of it, but I’d argue that it’s unwise to make assumptions about your users regardless. It’s easy to fall into the trap of thinking that all of our users are like us (or like the personas we come up with). My employer, Microsoft, makes a great case for why we should avoid doing this in their Inclusive Design materials:

When we design only for others like us, we exclude everyone who is not like us.

If you’re in business, it doesn’t pay to exclude potential customers (or alienate current ones).

Erecting unnecessary barriers Statement

  • “Everyone deserves access to the sum of all human knowledge.” — Nick Pettit
  • “[The web is] built with a set of principles that — much like the principles underlying the internet itself — are founded on ideas of universality and accessibility. ‘Universal access’ is a pretty good rallying cry for the web.” — Jeremy Keith
  • “The minute we start giving the middle finger to these other platforms, devices and browsers is the minute where the concept of The Web starts to erode. Because now it’s not about universal access to information, knowledge and interactivity. It’s about catering to the best of breed and leaving everyone else in the cold.” — Brad Frost

Unstated assumptions:

  • What’s on the web comprises the sum of human knowledge.
  • Progressive enhancement is fundamentally about universal access to this sum of human knowledge.
  • It is always immoral if something on the web isn’t available to everyone.

I don’t think anyone quoted here would argue that the Web (taken in its entirety) is “the sum of all human knowledge”—Nick, I imagine, was using that phrase somewhat hyperbolically. But there is a lot of information on the Web folks should have access to, whether from a business standpoint or a legal one. What Nick, Jeremy, and Brad are really highlighting here is that we often make somewhat arbitrary design & development decisions that can block access to useful or necessary information and interactions.

In my talk Designing with Empathy (slides), I discussed “mystery meat” navigation. I can’t imagine any designer sets out to make their site difficult to navigate, but we are influenced by what we see (and are inspired by) on the web. Some folks took inspiration from web-based art projects like this Toyota microsite:

On Toyota’s Mind is a classic example of mystery meat navigation. It’s a Flash site, and you can only navigate by happening to mouse over “hotspots” in the design. I’m pointing to one with a big red arrow here.

Though probably not directly influenced by On Toyota’s Mind, Yeshiva of Flatbush was certainly influenced by the concept of “experiential” (which is a polite way of saying “mystery meat”) navigation.

Yeshiva of Flatbush uses giant circles for their navigation. Intuitive, right?

That’s a design/UX example, but development is no different. How many Single Page Apps have you seen out there that really didn’t need to be built that way? Dozens? We often put the cart before the horse and decide to build a site using a particular stack or framework without even considering the type of content we’re dealing with or whether that decision is in the best interest of the project or its end users. That goes directly back to Tim’s earlier point.

Progressive enhancement recognizes that experience is a continuum and we all have different needs when accessing the Web. Some are permanent: low vision or blindness. Some are temporary: imprecise mousing due to injury. Others are purely situational: glare when your users are outside on a mobile device or have turned their screen brightness down to conserve battery. When we make our design and development decisions in the service of the project and the users who will access it, everyone wins.

Real answers to real questions

In the next section, Josh tries to say we only discuss progressive enhancement as a moral imperative. Clearly I don’t (and would go further to say no one else who was quoted does either). He argues that ours is “a philosophical argument, not a practical approach to web development”. I call bullshit. As I’ve just discussed in the previous sections, progressive enhancement is a practical, fiscally responsible, developmentally robust philosophical approach to building for the Web.

But let’s look at some of the questions he says we don’t answer:

“Wait, how often do people turn off JavaScript?”

Folks turning off JavaScript isn’t really the issue. It used to be, but that was years ago. I discussed the misconception that this is still a concern a few weeks ago. The real issue is whether or not JavaScript is available. Obviously your project may vary, but the UK government pegged their non-JavaScript usage at 1.1%. The more interesting bit, however, was that only 0.2% of their users fell into the “JavaScript off or no JavaScript support” camp. The remaining 0.9% of their users should have gotten the JavaScript-based enhancement on offer, but didn’t. The potential reasons are myriad. JavaScript is great, but you can’t assume it’ll be available.
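
To make that last point concrete, here’s a minimal sketch of the kind of fallback-first pattern being described (the form, endpoint, and feature tests are invented for illustration, not taken from the UK government’s site). The HTML works entirely on its own; the script only layers on an enhancement if it actually arrives, parses, and the browser qualifies:

    <!-- Baseline: a plain form that submits the old-fashioned way,
         with or without JavaScript. -->
    <form id="search" action="/search" method="get">
      <label for="q">Search</label>
      <input type="search" id="q" name="q">
      <button type="submit">Go</button>
    </form>

    <script>
    // Enhancement: runs only if the script downloads, parses, and the
    // browser passes these feature tests. If any of that fails, users
    // land in that 0.9–1.1% and the form above still works.
    if ('querySelector' in document && 'fetch' in window) {
      var form = document.querySelector('#search');
      form.addEventListener('submit', function (event) {
        event.preventDefault();
        var query = encodeURIComponent(form.elements.q.value);
        fetch(form.action + '?q=' + query)
          .then(function (response) { return response.text(); })
          .then(function (html) {
            // Swap the results in without a full page load.
            document.body.insertAdjacentHTML('beforeend', html);
          });
      });
    }
    </script>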

“I’m not trying to be mean, but I don’t think people in Sudan are going to buy my product.”

This isn’t really a question, but it is the kinda thing I hear every now and then. An even more aggressive and ill-informed version I got was “I sell TVs; blind people don’t watch TV”. As a practical person, I’m willing to admit that your organization probably knows its market pretty well. If your products aren’t available in certain regions, it’s probably not worth your while to cater to folks in those regions. But here’s some additional food for thought:

  • When you remove barriers to access for one group, you create opportunities for others. A perfect example of this is the curb cut. Curb cuts were originally created to facilitate folks in wheelchairs getting across the road. In creating curb cuts, we’ve also enabled kids to ride bicycles more safely on the sidewalk, delivery personnel to more easily move large numbers of boxes from their trucks into buildings, and parents to more easily cross streets with a stroller. Small considerations for one group pay dividends to more. What rational business doesn’t want to enable more folks to become customers?
  • Geography isn’t everything. I’m not as familiar with specific design considerations for Sudanese users, but since about 97% of Sudanese people are Muslim, let’s tuck into that. Ignoring translations and right-to-left text, let’s just focus on cultural sensitivity. For instance, a photo of a muscular, shirtless guy is relatively acceptable in much of the West, but would be incredibly offensive to a traditional Muslim population. Now your target audience may not be 100% Muslim (nor may your content lend itself to scantily-clad men), but if you are creating sites for mass consumption, knowing this might help you art direct the project better and build something that doesn’t offend potential customers.

Reach is incredibly important for companies and is something the Web enables quite easily. To squander that—whether intentionally or not—would be a shame.

Failures of understanding

Josh spends the next section discussing what he views as failures of the argument for progressive enhancement. He’s, of course, still debating it as a purely moral argument, which I think I’ve disproven at this point, but let’s take a look at what he has to say…

The first “fail” he casts on progressive enhancement proponents is that we “are wrong about what’s actually on the Web.” Josh identifies three primary offerings on the Web:

  • Business and personal software, both of which have exploded in use now that software has eaten the world and is accessed primarily via the web
  • Copyrighted news and entertainment content (text, photos, music, video, video games)
  • Advertising and marketing content

This is the fundamental issue with seeing the Web only through the lens of your own experience. Of course he would list software as the number one thing on the Web—I’m sure he uses Basecamp, Harvest, GitHub, Slack, TeamWork, Google Docs, Office 365, or any of a host of business-related Software as a Service offerings every day. As a beneficiary of fast network speeds, I’m not at all surprised that entertainment is his number two: Netflix, Hulu, HBO Go/Now… It’s great to be financially stable and live in the West. And as someone who works at a web agency, of course advertising would be his number three. A lot of the work Viget does (like that of most other agencies, for that matter) is marketing-related; nothing wrong with that. But the Web is so much more than this. Here’s just a fraction of the stuff he’s overlooked:

  • eCommerce,
  • Social media,
  • Banks,
  • Governments,
  • Non-profits,
  • Small businesses,
  • Educational institutions,
  • Research institutions,
  • Religious groups,
  • Community organizations, and
  • Forums.

It’s hard to find figures on anything but porn—which incidentally accounts for somewhere between 4% and 35% of the Web, depending on who you ask—but I have to imagine that these categories he’s overlooked probably account for the vast majority of “pages” on the Web even if they don’t account for the majority of traffic on it. Of course, as of 2014, the majority of traffic on the Web was bots, so…

The second “fail” he identifies is that our “concepts of universal access and moral imperatives… make no sense” in light of “fail” number one. He goes on to provide a list of things he seems to think we want even though advocating for progressive enhancement (and even universal access) doesn’t mean advocating for any of these things:

  • All software and copyrighted news/entertainment content accessed via the web should be free, and Netflix, Spotify, HBO Now, etc. should allow anyone to download original music and video files because some people don’t have JavaScript. I’ve never heard anyone say that… ever. Advocating a smart development philosophy doesn’t make you anti-copyright or against making money.
  • Any content that can’t be accessed via old browsers/devices shouldn’t be on the web in the first place. No one made that judgement. We just think it behooves you to increase the potential reach of your products and to have a workable fallback in case the ideal access scenario isn’t available. You know, smart business decisions.
  • Everything on the web should have built-in translations into every language. This would be an absurd idea given that the number of languages in use on this planet tops 6,500. Even if you discount the 2,000 or so that have fewer than 1,000 speakers, it’s still absurd. I don’t know anyone who would advocate for translation to every language.1
  • Honda needs to consider a universal audience for its marketing websites even though (a) its offline advertising is not universal, and (b) only certain people can access or afford the cars being advertised. To his first point, Honda actually does run offline advertising in multiple languages. They even issue press releases mentioning it: “The newspaper and radio advertisements will appear in Spanish or English to match the primary language of each targeted media outlet.” As for his second argument… making assumptions about target audience and who can or cannot afford your product seems pretty friggin’ elitist; it’s also incredibly subjective. For instance, we did a project for a major investment firm where we needed to support BlackBerry 4 & 5 even though there were many more popular smartphones on the market. The reason? They had several high-dollar investors who loved their older phones. You can’t make assumptions.
  • All of the above should also be applied to offline software, books, magazines, newspapers, TV shows, CDs, movies, advertising, etc. Oh, I see, he’s being intentionally ridiculous.

I’m gonna skip the third fail since it presumes morality is the only argument progressive enhancement has and then chastises the progressive enhancement community for not spending time fighting for equitable Internet access and net neutrality and against things like censorship (which, of course, many of us actually do).

In his closing section, Josh talks about progressive enhancement moderates and he quotes Matt Griffin on A List Apart:

One thing that needs to be considered when we’re experimenting … is who the audience is for that thing. Will everyone be able to use it? Not if it’s, say, a tool confined to a corporate intranet. Do we then need to worry about sub-3G network users? No, probably not. What about if we’re building on the open web but we’re building a product that is expressly for transferring or manipulating HD video files? Do we need to worry about slow networks then? … Context, as usual, is everything.

In other words, it depends, which is what we’ve all been saying all along.

I’ll leave you with these facts:

  • Progressive enhancement has many benefits, not the least of which are resilience and reach.
  • You don’t have to like or even use progressive enhancement, but that doesn’t detract from its usefulness.
  • If you subscribe to progressive enhancement, you may have a project (or several) that isn’t really a good candidate for it (e.g., online photo editing software).
  • JavaScript is a crucial part of the progressive enhancement toolbox.
  • JavaScript availability is never guaranteed, so it’s important to consider offering fallbacks for critical tasks.
  • Progressive enhancement is neither moral nor amoral; it’s just a smart way to build for the Web.

Is progressive enhancement necessary to use on every project?

No.

Would users benefit from progressive enhancement if it was followed on more sites than it is now?

Heck yeah.

Is progressive enhancement right for your project?

It depends.

My sincere thanks to Sara Soueidan, Baldur Bjarnason, Jason Garber, and Tim Kadlec for taking the time to give me feedback on this piece.

  1. Of course, last I checked, over 55% of the Web was in English and just shy of 12% of the world speaks English, so…

Aaron Gustafson

This is an excellent and well-argued piece from Dieter Bohn. In it, he argues that “the web” is characterized by two things:

  1. URLs and
  2. Client agnosticism.

Reading this, I’m reminded of a lot of Jeremy’s writings about products being “on the web” rather than “of the web”. It’s an incredibly important distinction in my mind because, as Dieter so eloquently puts it:

The openness of the web allowed small companies to become big ones without seeking permission from the biggest ones. Preserving the web, or more specifically the open principles behind it, means protecting one of the few paths for innovation left in the modern tech world that doesn’t have a giant company acting as a gatekeeper. And there’s reason not to trust those giant companies: there’s much less incentive to encourage openness when you have a massive empire to defend.

These are important things to consider when deciding where to invest your time and energy.

Read on The Verge

Related posts

Pickin’ dates

HTML web components for augmenting date inputs.

Progressive disclosure defaults

If you’re going to toggle the display of content with CSS, make sure the more complex selector does the hiding, not the showing.
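
As a quick illustration of that principle (the class names here are hypothetical), the default state keeps the content visible, and only the longer, enhancement-dependent selector hides anything:

    /* Default: the panel is visible even if JavaScript never runs. */
    .disclosure-panel {
      display: block;
    }

    /* The more complex selector does the hiding: the content only
       collapses once a script has added the hooks that signal the
       enhancement actually took hold. */
    .js .disclosure-panel.is-collapsed {
      display: none;
    }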

Schooltijd

Going back to school in Amsterdam.

HTML web components

Don’t replace. Augment.

Read-only web apps

It’s fine to require JavaScript for read/write functionality. But have you considered a read-only mode without JavaScript?

Related links

CloseBrace | A Brief, Incomplete History of JavaScript

Another deep dive into web history, this time on JavaScript. The timeline of JS on the web is retroactively broken down into four eras:

  • the early era: ~1996 – 2004,
  • the jQuery era: ~2004 – 2010,
  • the Single Page App era: ~2010 – 2014, and
  • the modern era: ~2014 – present.

Nice to see “vanilla” JavaScript making a resurgence in that last one.

It’s 2017, the JavaScript ecosystem is both thriving and confusing as all hell. No one seems to be quite sure where it’s headed, only that it’s going to continue to grow and change. The web’s not going anywhere, which means JS isn’t going anywhere, and I’m excited to see what future eras bring us.

How would you build Wordle with just HTML and CSS? | Scott Jehl, Web Designer/Developer

This is a great thought exercise in progressive enhancement …that Scott then turns into a real exercise!

PodRocket - A web development podcast from LogRocket: HTML web components with Chris Ferdinandi

I somehow missed this when it came out in January but Amber just pointed me to it—an interview with Chris about HTML web components, available for your huffduffing pleasure.

jgarber623/aria-collapsible: A dependency-free Web Component that generates progressively-enhanced collapsible regions using ARIA States and Properties.

This is a really lovely little HTML web component from Jason. It does just one thing—wires up a trigger button to toggle-able content, taking care of all the ARIA for you behind the scenes.

HTML Web Components on the Server Are Great | Scott Jehl, Web Designer/Developer

Scott has written a perfect description of HTML web components:

They are custom elements that

  1. are not empty, and instead contain functional HTML from the start,
  2. receive some amount of progressive enhancement using the Web Components JavaScript lifecycle, and
  3. do not rely on that JavaScript to run for their basic content or functionality.
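
To make that definition concrete, here’s a minimal sketch of such a component (the element name and behaviour are invented for illustration). The markup inside the custom element is fully functional on its own; the lifecycle callback only augments it:

    <!-- The custom element is not empty: it contains working HTML that
         renders and reads fine even if the script below never runs. -->
    <toggle-section>
      <h2>Shipping details</h2>
      <p>Orders ship within two business days.</p>
    </toggle-section>

    <script>
    // Enhancement via the Web Components lifecycle: once the element is
    // defined and connected, the heading becomes a button that collapses
    // and expands the text.
    customElements.define('toggle-section', class extends HTMLElement {
      connectedCallback() {
        const heading = this.querySelector('h2');
        const body = this.querySelector('p');
        const button = document.createElement('button');
        button.textContent = heading.textContent;
        button.setAttribute('aria-expanded', 'true');
        button.addEventListener('click', () => {
          const expanded = button.getAttribute('aria-expanded') === 'true';
          button.setAttribute('aria-expanded', String(!expanded));
          body.hidden = expanded;
        });
        heading.replaceChildren(button);
      }
    });
    </script>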
