deck.js lite: modern HTML presentations

Taking a page out of Don Stewart’s book, I’m planning to release a project to the Internet every week or two. Most, if not all, of them will be open source and hosted on Github. I’ll be posting blurbs about them on this blog filed under a new category – Projects. Feel free to follow along or fork away.

Deck.js is a very cool project that provides a set of CSS and JavaScript templates that let you create clean, elegant slideshows using HTML. I’m becoming increasingly attracted to HTML as a general-purpose documentation format, so seeing projects like deck.js makes me really happy.

I’m currently using deck.js to put together a presentation for a class I’m taking, but while I was at it I thought I’d do some reorganization of the deck.js codebase to make things a little easier. The files you need to include to use deck.js are currently spread across a number of different folders, meaning that as a user it might take you a while to figure out where everything is and what you need to include. So in the spirit of open source I decided to fork the repo on Github and create a ‘lite’ version.

This version (also available on Github under the same license) packs all the files into a single folder, shortens some names and paths, and removes some things (tests and SCSS files) that users might not care about. I’ve also updated the introductory presentation to point to the new files so you can use it as a template for making your own slides. I’ve been talking to deck.js’s creator, Caleb Troughton, and I plan to keep the ‘lite’ version in sync with the main repo so that you’re always using the latest and greatest.
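For the curious, a minimal deck built this way looks something like the sketch below. The exact file names and paths are illustrative (check the introductory presentation in the repo for the real ones), but the overall shape, one folder of assets, one section per slide, and a single jQuery-based init call, is what deck.js expects:

    <!DOCTYPE html>
    <html>
    <head>
      <title>My slides</title>
      <!-- In the 'lite' layout all assets sit in one folder;
           these paths are illustrative, not the actual ones -->
      <link rel="stylesheet" href="deck/deck.css">
      <script src="deck/jquery.min.js"></script>
      <script src="deck/deck.js"></script>
    </head>
    <body class="deck-container">
      <!-- Each section.slide element becomes one slide -->
      <section class="slide">
        <h1>Hello, deck.js</h1>
      </section>
      <section class="slide">
        <h2>A second slide</h2>
        <p>Plain HTML content goes here.</p>
      </section>
      <script>
        // deck.js is built on jQuery; this initializes the deck
        $(function() { $.deck('.slide'); });
      </script>
    </body>
    </html>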

If there’s anything else you’d like to see in a lite version (or just in deck.js in general) please let me know. I think the best days of the web are just ahead and having web-native slideshows is definitely a step in the right direction.

PS. In case you’re wondering: no, The ByteBaker is not going to become just an announcement board for my projects. However, graduate school is taking up a lot of my time and energy right now. I also think it’s important that I keep to releasing one project a week, and the best way I can think of to do that is by documenting my progress online. Normal programming will resume soon.


The Web is for Documents: Part I

I’ve always had something of a love-hate relationship when it comes to webapps. I use a lot of them and by and large I like them. But there was always something about them that seemed just a tad bit … unnatural. I could never quite put my finger on it, and over the years, as I started using them more and more, I put my uneasiness down to the newness of the whole thing. By and large, I managed to put it out of my mind or learn to just live with it.

It only came back to me a few weeks ago as I was making plans for an independent study. See, one of the larger gaps in my knowledge of computer technology is networking in general and the Web in particular. I wanted to change that to some extent before I left college, and since I had just one semester left I decided to spend it building a webapp of some sort. But when I did that, the uneasiness I had felt all along came flooding back. I knew that very powerful applications were being built using the current set of Web technologies (mainly HTML, CSS and JavaScript), but as I read more and more about web programming, something felt wrong. People were writing huge frameworks and JavaScript libraries in order to build great programs that ran essentially the same no matter where in the world you were, as long as you were running a modern browser. Though it was a great idea and I’m sure lots of hard work had gone into it all, something felt out of place. After exploring the world of JavaScript frameworks and CSS generation tools, I think I’ve stumbled upon the answer.

The thing is, the Web was never built to be a host for dynamic applications. The World Wide Web was (and is) a platform for sharing and displaying documents and it’s only recently that we’ve been trying to hack that document-based framework to enable everything we’re seeing now. Even as the web evolves, the basic standards are still very much true to the Web’s document-based roots. The newest HTML5 specification actually adds a number of semantic elements such as headers, footers, asides and section tags that will help us create better, more meaningful documents. HyperText is ultimately a semantic markup language, no matter how much we try to hack it to be a GUI layout language. JavaScript ultimately manipulates a Document Object Model (the DOM). The inherent document nature of the Web and everything built on it isn’t something that can be ignored and it’s certainly not something that is going away any time soon.
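To make that concrete, here’s what those semantic elements look like in practice (a minimal sketch of an HTML5 page, nothing more):

    <!DOCTYPE html>
    <html>
    <body>
      <header>
        <h1>The ByteBaker</h1>
        <nav><!-- site-wide navigation links --></nav>
      </header>
      <section>
        <article>
          <h2>A post title</h2>
          <p>The post's actual content.</p>
          <aside>A tangential remark, marked as such.</aside>
        </article>
      </section>
      <footer><p>Copyright, colophon and so on.</p></footer>
    </body>
    </html>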

So does this mean that webapps are bad or doomed to failure? Not at all. But it does mean that there are some things we need to keep in mind as we build and use them. JavaScript provides a very powerful (and increasingly fast) tool for manipulating our documents in real time, and CSS is a good approach for styling and changing presentation (though the language itself could use some work). In order to build webapps that are both useful and feel natural in the context of the web, we should always have the web’s document basis in mind. Webapps that acknowledge and embrace this will have a better time than those that only want to recreate desktop interfaces on top of HTML5 technologies.
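Even the dynamic behavior is document manipulation at heart. Strip away any framework and every JavaScript UI trick reduces to operations on the document tree, something like this toy snippet (the ‘inbox’ element is hypothetical):

    <script>
      // All "app-like" behavior is ultimately DOM surgery:
      // find a node in the document tree...
      var inbox = document.getElementById('inbox');

      // ...construct a new fragment of document...
      var item = document.createElement('li');
      item.appendChild(document.createTextNode('You have new mail'));

      // ...and splice it in. The webapp is still a document underneath.
      inbox.appendChild(item);
    </script>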

Even today, the most elegant webapps are the ones that have embraced the document idea: Gmail and Simplenote make no pretense of being or mimicking desktop apps. The reason Gmail quickly became more popular than almost any other webmail client out there is that it took a different approach from everyone else: Gmail didn’t try to look or feel like a full desktop app, but it wasn’t just a webpage with links to your messages either. There is a very delicate balance of dynamism and static presentation that makes Gmail so great for the web (as well as the absence of annoying banner ads).

I think the rise of the mobile web and the app store model for mobile devices is helping this new model of webapp become more popular. We’re seeing the rise of cloudtop services — services where the web interface is just one of several ways of interacting with the service. Take for example Simplenote and Dropbox. Both have a decent web interface, but also have mobile and desktop apps for the popular platforms and an API allowing others to build inventive applications on top of their services. This means that the webapp doesn’t have to be the be-all and end-all of the user interface. There are many interfaces, each playing to the strengths of their respective platforms.
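To sketch what that looks like from a client’s point of view: each interface, web, mobile or desktop, is just another consumer of the same data feed. The endpoint and fields below are invented for illustration (neither Simplenote’s nor Dropbox’s actual API), but the shape is typical:

    <script>
      // Hypothetical cloudtop API call; the URL and response
      // fields are made up for illustration.
      var request = new XMLHttpRequest();
      request.open('GET', 'https://api.example.com/v1/notes');
      request.onload = function () {
        var notes = JSON.parse(request.responseText);
        // A webapp, a phone app and a desktop app can all render
        // this same list in whatever way suits their platform.
        for (var i = 0; i < notes.length; i++) {
          console.log(notes[i].title);
        }
      };
      request.send();
    </script>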

Of course, not all services are going this route. 37signals makes some great web software (or so I’ve heard; I’m not a customer myself). They’re going all-out Web, at least for their Basecamp product. Will it work? Maybe. They claim it’s because they don’t want to have specialized apps for each platform. But the web itself is a platform, and the fact that they say you need a WebKit mobile browser makes it sound like they’re just choosing the web platform instead of native mobile platforms. I personally don’t agree with their direction (or their stated reasons for it), but it will be interesting to see what happens.

I think we’re living in a very exciting time, with our technology going in numerous interesting directions. As the idea of cloudtop services becomes more popular, we’re going to see a plethora of native applications that play to the strengths of their native platforms. The ones that are successful on the web will embrace its document nature instead of trying to ape desktop apps. And it’s not just apps we should be looking at: the meaning and scope of documents themselves will change and improve as the Web and its technologies evolve. Stay tuned for part II, where I look at some novel types of documents that the web is enabling.

I got a tumblelog

I got a tumblelog. Domain-wise it’s part of my static site at Basu::shr. Behind the scenes it’s a basic Tumblr weblog with a nice-looking theme and little else. I already have a proper weblog (this one) and a static website. I also have Twitter and Identi.ca accounts, and I have a Friendfeed which pulls in updates from lots of different services that I use. So why yet another weblog?

The answer is that the web, especially the so-called Web 2.0, has been becoming more UNIX-like as time goes on. There are lots of different webapps out there, and the best ones focus on doing one thing and doing it well. It’s up to others to pull these webapps together via their APIs, in a manner similar to the way UNIX shell scripts work. While this is in general a good thing, it can be a hassle for someone like me who would like to keep all of their online activity together.

For a while I really wished there was One Great Webapp to Rule Them All. It would be one great system into which I could put all my status updates, pictures, videos, links and conversations, and it would automatically send them out to whichever specific webapp they needed to go to. And much to my delight I found one just like that: Posterous. You send an email to Posterous containing whatever you want to post, and Posterous can be set up to direct it to a number of different webapps. This is a really cool thing: using email as a web equivalent of UNIX pipes. I tried it for a few days, and while I was happy with it to start with, I came to realize some interesting things.

The first thing I realized was that email, for all its flexibility and usefulness, can be a bit tedious for some activities. If all you want to do is send out a 140-character update or post a video, it’s just a bit too much to switch to a mail client, copy/paste a link or type a message, select a recipient and then hit ‘Send’. Secondly, for conversation-oriented media like Twitter, sending out your message is only half the problem, and it makes no sense to use one tool to send out a message (email) and another to see incoming messages (another app or webapp). Add in the fact that there are lots of small applications and browser extensions that do a really good job of putting an easy-to-use layer on top of web services, and email starts to lose its silver lining.

However, the greater realization I had was that I didn’t necessarily want all my online activity pulled into one place. For example, this blog is about technology and my experiences with it, and it’s not something that my liberal arts friends particularly care about. On the other hand, readers coming to this site to read about my adventures with programming languages probably don’t want to know all that much about how the dining hall food is today or how tired I am after my creative writing class (things that go into my Twitter stream). I wouldn’t want to mix those two because the result would simply be a mess. I also don’t want to add things like cool videos, art or articles I find to either of these two streams, unless I really want to blog about something (in which case I will write a post about it) or I really want my friends to know about it (in which case I’ll twitter it). By yesterday morning I had decided that I still wanted to have an online, accessible record of stuff I found interesting (in case anyone else really wanted to see it), but I didn’t want to just dump it into the other streams.

Thus came about the tumblelog. I could have just stuck with my Posterous, but I like Tumblr better, in part because of the gorgeous themes (which I hear can be used with Posterous, though I couldn’t find an easy way to do it) but also because it seems that Tumblr, especially its bookmarklet, processes excerpts from websites in a smarter way than Posterous. And I already had a Tumblr account that I’d started a few months ago but hadn’t really used till now.

The way things stand now, here is how I currently use my multiple web services:

  • The ByteBaker for long-form tech-oriented articles
  • Twitter and Identi.ca for really short observations, ideas and messages
  • Basu::Shr::Weblog as a tumblelog for recording interesting things I find online, mostly videos and images
  • Diigo for interesting links that I want to keep a record of, but don’t care to actively share
  • Friendfeed to pull together everything above (plus a few other services) for anyone who’s interested

Considering that this isn’t the first time I’ve done this dance, I won’t be too surprised if I change this setup again soon. At the moment the services and the tools around them seem stable and useful, and I’ve been able to use them with very little mental overhead (which is very important for me). Only time will tell if this works out, but I hope it does. On a related note, I’ve also started decoupling Facebook from my online presence because I’m growing increasingly uncomfortable with their “Walled Garden” approach, but that’s a matter for another article.

Getting social networks under control

Social networks. We all use them (to some extent) and there are a lot of them. I appreciate the services they provide and enjoy using them, but it can be hard to not spend too much time and energy micromanaging. For people who use just one social utility there isn’t much of a problem: everything that they do goes into that one environment. However, chances are you use more than one. It’s perfectly fine to keep them completely separated from each other and I’m sure that works for a lot of users. But I personally would like to provide a coherent image of myself across the sites that I do use. As I found out over the weekend, that’s not as simple as it sounds.

The Facebook Factor

Before I talk about how I actually went about getting my social networks under control, there are a few things about the networks themselves that need to be said. First off, Facebook is the unstoppable juggernaut when it comes to social networks. I didn’t really think about this until I started out on my quest, but Facebook does (in a limited way) a lot of the things which the other social networks individually focus on. Like most other people, you can use Facebook to keep in touch with friends and make new ones. You can post photos to Facebook albums; the only limit I’ve heard of is 60 photos per album. There’s also the status line where you can broadcast what you’re doing to everyone who wants to listen, and conversely the newsfeed lets you keep track of everyone else. You can write notes and use them as a lightweight blog. Interesting links and videos you find around the web can also go on Facebook. And everything you do has a chance to become the basis of a longwinded, interactive discussion among your friends.

Facebook’s purpose as a social network is essentially to surround all the other social networking utilities. Posted a video on YouTube or saved an interesting link on Delicious? Put a link to it on Facebook. Wrote a blog post? Post a link to it or have it be automatically mirrored in your notes (I learned this trick from a new friend of mine). Want to share photos of your hike with everyone who went with you? Put them in a Facebook album, tag your friends and they’ll be automatically notified. No need to decide between Flickr, Picasa and Zooomr, or to worry about how you’re going to let others know that their photos are up. If you need a place to discuss something, you can start a Facebook group and invite your friends rather than having to set up a mailing list or forum. Though Facebook does all these things decently well, it doesn’t do them quite as well as the sites out there built specifically for those purposes. And Facebook is definitely an ‘inside-out’ community: to use it well you need to have a network of friends first. On the other hand, if your goal is to reach out and find new people to connect with, you’re better off using another social site.

Facebook was my first problem. I tend to think of Facebook as a place for connecting with people I already know fairly well. It’s a good way to keep my friends updated on what I’m doing, especially if they aren’t using Twitter or Delicious or following my blog regularly. I really want Facebook to be part of my social network, but since it won’t directly help me connect with new people, I want to use it as passively as possible. To be fair, I suppose Facebook could probably be used for ‘reaching-out’ purposes, but I personally don’t want to do that since in my opinion there are better tools for the job. Luckily for me, Facebook seems to understand its role as an aggregator of sorts: it makes it simple to integrate your blogs as notes and post your Delicious links and shared Google Reader feeds to the newsfeed. So even though most of my online activity takes place elsewhere, almost all of it is seamlessly mirrored on Facebook for all my friends to see and discuss.

A Network of Networks

The second part of the problem involves the many services that I use. The ones I use most frequently are Twitter, Delicious, this blog, Last.fm and Facebook (not necessarily in that order). I do have a Picasa account, but I mainly use that for sharing pictures with my parents and close family, so it’s not really part of my social net. For a while I really wanted to integrate all of this together, and the Ping.fm service let me do just that. I could post something to Ping.fm and it would be sent to Twitter and Facebook, and any links would be saved to Delicious. Using Twitterfeed I could have blog updates routed to Ping.fm as well. This setup worked pretty well and there was nothing wrong with the implementation itself. However, I found some flaws in working this way that involved the very concept of trying to pull everything together.

The services I’ve named above are all very different. That’s probably part of the reason why Facebook only offers generic clones of them (for now at least). Twitter is good at communicating small snippets of information, while my blog is for far more detailed writing (my posts are routinely over a thousand words). Using Delicious with Ping.fm allows me to quickly save anything from the web, but it also means I don’t fully utilize Delicious’ tagging functionality, which helps greatly when trying to find something later. By using Ping.fm to quickly post to everything, I was losing out on the focused functionality that each service offered.

I found myself faced with a classic dilemma: I wanted people who ‘followed’ me to be able to see everything that I did socially, but I also wanted to be able to use the individual social services to their full potential. I found the answer (to some extent) in the form of Friendfeed. It’s an aggregator for all your social content, not entirely unlike the role that Facebook fills. Friendfeed gathers your activity from as many as 58 services and provides a single unified feed that people can follow and interact with. My Friendfeed collects information from all the services I listed above as well as Google Reader’s shared feeds. In an ideal world, everyone would see my Friendfeed and interact mainly via comments on that (or on the original sources).

However, it’s not an ideal world. In particular, most people I know (and who know me) don’t actually use Friendfeed (unfortunately). To get around the fact that not everyone is going to see the collected feed that I present, the most reasonable solution is to maintain a careful amount of redundancy. I say careful because if I were to simply link everything to everything, not only would people be seeing the same thing many times over, I could also potentially set up infinite loops with messages being propagated from service to service and back ad infinitum. My first step in this careful redundancy setup is to keep Facebook separate. I use its own native integration features to pull information from the other services, but nothing pulls from Facebook. I generate very little original content on Facebook and so it’s mainly just for collection and discussion.
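To make the loop problem concrete: any cross-posting setup needs to remember where an item came from and where it has already been, so nothing gets echoed back to its source. A toy sketch of the idea (entirely hypothetical, not any real service’s code):

    <script>
      // Toy illustration of "careful redundancy": remember what has
      // already been syndicated and never post back to the origin.
      var seen = {}; // canonical URL -> true once propagated

      function syndicate(item, services) {
        if (seen[item.url]) return;   // already sent; breaks the loop
        seen[item.url] = true;
        for (var i = 0; i < services.length; i++) {
          if (services[i].name !== item.origin) { // don't echo to source
            services[i].post(item);
          }
        }
      }
    </script>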

The second step is to take into account the distinctions between the services and use them accordingly. For example, Twitter is for status updates, so it gets used the most, while Delicious is for saving stuff from the web. Previously I had everything that I saved in Delicious appear on Twitter as well (via Ping.fm), but I’ve come to realize that I save a fair amount of stuff for my research work which I don’t think everyone wants to know about. By separating the services, I can post to Twitter only the things I think people will find interesting. I can also use Twitter to elicit responses and feedback from followers while I use Delicious more for classification. (I’m also considering moving to Diigo, but that’s a subject for another post.) My blog also gets mirrored to Twitter via Twitterfeed because I think it’s an important part of what I want people to know about. Of course, if you are following me on Friendfeed, you will see all my bookmarks and some duplicates. That’s something I’m still working to solve; I’m not comfortable with having my followers see the same things multiple times.

The third and final piece of the puzzle involves Twitter. Though I use it mainly to send out updates to my followers, it also becomes a medium for discussion with people retweeting and replying to what I’ve said. Unfortunately the native Twitter interface isn’t the best for managing a multi-way communication stream and I’ve found that I can miss out on a lot if I don’t pay careful attention. Luckily there are a number of Twitter clients out there that do a good job of managing your Twitter traffic for you. The one I currently use is TweetDeck which provides a nice multi-panel layout for seeing general tweets, replies, mentions of your name and direct messages side by side.

In conclusion, the problem of managing your social networks is still a tricky one and requires some careful thought to get right. Even then, the result is not perfect by any means (especially if your friends aren’t on the same networks). As time progresses, it’ll be interesting to see how social networks evolve. Facebook in particular is looking to place itself at the center of the social web. Personally I’m not a big fan of having Facebook be the center of everything, though it could have its benefits. But for the time being Facebook does have serious competitors, each with their own strengths, and there are some things (like this blog) that I would prefer to keep outside any single social network.

It’s time for web 1.0 to die

Yes, this is going to be a controversial topic to write on and yes, I have a history of terribly messing up controversial topics. But I’m going to make an effort to keep my thoughts clear and justifiable. Without sounding too academic, let me first present my thesis:

The original internet and World Wide Web were among the greatest technological innovations in human history. They have an interesting history, fundamentally linked to many other developments of the past few decades, and any self-respecting computer geek should take a few hours to read about the start and growth of the net. However, in the 21st century, the old web is no longer enough. It is time for us to accept that the old days of plain HTML and static links are just that: old days. I’m going to show that important developments have made it necessary for us to look at the web in a new light, and that we must actively embrace these changes so that we can have a hope of shaping them into something as monumental as the original web.

Clarification: I’ve received some comments that made me realize I should clarify what I mean by web 1.0. For the rest of this post I’ll take it to mean the web in terms of mostly human-generated, static pages with mostly static content. By contrast, web 2.0 can be defined as mostly auto-generated, constantly changing content where the main human role is to create the content, not manage its organization or delivery.

Trip down memory lane

In many ways the start of the internet can be traced back to Douglas Engelbart’s famous Mother of All Demos in 1968. This demo showed off the revolutionary NLS (oNLine System), which was years ahead of its time and presented important concepts such as voice and video conferencing, complex document formats and hyperlinks.

Fast forward a few decades, and the internet was slightly more mainstream by the early 1990s. Though the internet had implemented Engelbart’s vision to some extent, it was actually quite primitive by the standards of the demo. Hyperlinks were simple one-way roads instead of the ubiquitous and powerful cross-linking system that was part of the demo, and rich media communication was still years away. However, the key parts were in place. The early web was a rather simple place to live in: mostly text, followed later by simple graphics. Pages were static and more often than not crafted by hand. For much of the 1990s the web bore the signs of the academic, somewhat austere and information-dense culture that had spawned it.

That web is not the web of today. With low bandwidth, information density needed to be maximized and presentation was often an afterthought; with a small number of producers and a large number of viewers, it was fairly simple to do things by hand. The drastic change of the internet in the past few years can be attributed to two main causes:

  1. The growth of scripting languages and applications built on them has allowed a massive explosion in the number of content producers. It’s now possible to build a fairly high-quality website without writing a single line of code.
  2. Bandwidth, storage and processing prices have fallen, opening the door for rich media applications, which can be distributed across the Internet.

Dynamism’s Day

The game of web development has changed at a fundamental level. You can no longer ship a website to a customer and expect it to stay the same way for months at a time. Dynamic content is the order of the day. Blogs are no longer the creations of random individuals with too much time on their hands. Instead, if you want to have a viable web presence, you’d better have a blog and you’d better update it regularly. This goes for individuals as well as corporations (especially those whose business is the web). This in turn means that hand-rolling a website is impractical at best and downright stupid at worst. A recent article proclaimed that Dreamweaver is dying, and I have to grudgingly agree. Over the past two years, I tried maintaining an old-style website by hand with mostly static content, but it’s simply not worth it. I would rather spend 5 minutes working on a new post than fiddle around with HTML and CSS to get things to look right. And don’t even get me started on keeping links and navbars up to date. Dreamweaver and similar tools help. A lot. But they’re not enough.

Web 2.0 is showing us an important fact that we must not ignore: content creators do not need to be programmers as well. What that means for you and me is that if you want to create a content-focused site (and that’s what most of us really want), then an automated content management system like WordPress or Drupal, combined with good themes and widgets, should be the first pick. Only if the need for customization becomes overbearing (i.e. you need to forge a brand image with a custom logo and theme) should you consider diving into the code.

For web developers, this means that you should pick a content platform or two and learn it back to front and inside out. Try porting some previous designs, and suggest that customers take a more active role in creating their web presence. The web’s hallmark is that it is by definition a bidirectional medium; anyone who wants to be successful on the web must respect and utilize that bidirectionality.

To generate the best content, you want to have good tools. Unfortunately HTML and CSS simply aren’t good tools for writers, journalists or artists. They want, no, they need WYSIWYG editors where they can see what their content will look like without worrying about the layers beneath. Cheap bandwidth means you can now use heavier technologies like Flash to build better-looking tools. Cheap storage and processing power mean that you can generate webpages on the fly while keeping the actual content neatly stored in a backend database. HTML and CSS aren’t the core technologies of the web anymore; they’re a thin veneer that hides the raw power underneath and gives everyone a simpler, focused view of what they need to see.
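The pattern behind all of this is simple enough to sketch. Content lives as data (in a real CMS, rows in a database; a plain array here for illustration) and a bit of code turns it into markup on the fly. This toy snippet is not any particular CMS’s code, just the shape of the idea:

    <script>
      // Content as data: in a real CMS this would come from a database.
      var posts = [
        { title: 'Hello, web 2.0',
          body: 'The content lives here, not in hand-written markup.' }
      ];

      // Presentation as code: a template the author never touches.
      function renderPost(post) {
        return '<article><h2>' + post.title + '</h2>' +
               '<p>' + post.body + '</p></article>';
      }

      // The "page" is generated on the fly from the stored content.
      for (var i = 0; i < posts.length; i++) {
        document.body.innerHTML += renderPost(posts[i]);
      }
    </script>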

Let’s face it, HTML is ugly. It’s angle-bracket hell and you know it!! No human should have to write that sort of thing by hand. CSS is better, but not by much. A lot of people complain that autogenerators create really bad, really redundant HTML and CSS. That’s true, and I used to think it was a bad thing; I hand-crafted my code until not too long ago. But the fact is: no one cares!! No one is ever going to really read your generated code except browsers, and they’re mostly pretty tolerant of what they’re fed as long as it’s not downright wrong. The massive boost in productivity far outweighs any aesthetic qualms you may have. Sure, you still need to do some amount of tweaking to get things just right, but it’s very easy to get good enough without doing any tweaking at all.

On another note, the database + dynamic code combination creates far better presentation/data separation than HTML/CSS ever could. You had to be really disciplined not to mix presentation considerations into your supposedly semantically structured HTML, and you never got it quite right. Admit it, you know you bent the rules. But with web 2.0, such separation is natural. Theming is at the heart of almost every major CMS out there, and the web is a better place for it.

But wait, there’s more…

CMSs are just the tip of the iceberg. Web 2.0 is becoming an application platform, much closer to Engelbart’s vision than to Web 1.0’s “information highway”. It’s not just content creation and delivery; it’s active interaction that is getting the spotlight. And HTML/CSS utterly fails at this: text and graphics are OK, sound and video are good, but active interaction is even better. Web apps may not be as feature-rich as their desktop equivalents, but they’re not far behind either. I use Google Docs almost as much as I use Word, and web-based IDEs are starting to become a reality. Take a look at the SproutCore and Cappuccino web frameworks for some examples of what’s on the horizon.

The limitations of Web 1.0 have spawned the development of multiple ways around them. The web is now a mix of different document formats, and PDFs are gradually becoming the document interchange format of choice for many organizations and companies. Streaming media has proved to be a much more popular alternative than simply offering up files for download. The dynamic web is a much faster and more interesting place than 1.0 could ever be.

We’re only starting to explore the web as a core component of personal computing. Cloud computing is still a very nascent technology, but one that looks like it will progress by leaps and bounds in the years to come. Amazon S3 lets you store practically unlimited amounts of data really cheaply, making it possible for any technologically savvy individual or group to roll their own webapp without investing in massive computing resources first. Why buy an external hard drive when you can pay a small monthly fee for crash-proof, access-anywhere storage? (If you trust them with your data, that is, but that’s for another post…)

Web 1.0 is simple and for the past two decades it has served the needs of human society admirably. But we need more now. There is a massive amount of computing power in the world today, but we can’t use it properly if we stick to old fashioned HTML. The internet is no longer a web of statically linked pages. It’s a complex network of rapidly changing web applications interfacing with each other on a number of levels. It’s more like a growing, vibrant ecosystem than it is a spider-web. Web 2.0 is alive.

In the near future…

We must learn to start treating the web like a vast collection of interacting programs and not just as a simple file hierarchy. The author of the “Dreamweaver is dying” article says (quite rightly):

In the relatively near future every website will be a dynamically-generated web application and all of today’s sites built on multiple static pages will be ripped out and replaced.

I think that is pretty close to the truth. Sure, there will still be some holdovers, but any serious website will have no choice but to become a distributed webapp: part of it running on a server (or server farm) and part of it running as a scripted application in your browser. You’ll still need a grasp of HTML/CSS and their descendants (for a while at least), but you’ll use them in much the same way people do baking nowadays: you make special treats now and then, but you’d hardly ever bake your own bread.

Web 1.0 is dying. Its passing can be painful, but it doesn’t have to be. Start using content management systems; I would recommend open source systems like Drupal, WordPress and Joomla. If you’re a developer, learn PHP and JavaScript while keeping your HTML/CSS knowledge in fair shape. If you’re building a web application or framework, make it easier for non-browser clients, like other webapps, to access data and functionality. The easier it is to use your service, the more people will use it; Twitter and Google Maps seem to do this quite well. Don’t worry about supporting every version of every browser ever built. Pick 2 or 3 modern browsers and make sure versions released in the last year or two work well. Above all, don’t do anything to turn away the early adopters.

It’s time for web 1.0 to die an honorable death. It’s time for the rest of us to move on.