Discovering Dreamweaver

Over the past few weeks, I’ve been doing a considerable amount of web design work. First, there is my personal website, which I’m trying to make more serious and comprehensive. Besides that, I’ve been working to redesign my college Computer Science department’s website. I also have a part-time job with the Foreign Language Resource Center at college, which is launching a new program to give students an ePortfolio where they can store and show information about their coursework and other language experience. Originally the plan was to have some art students design a set of templates, but I was rather appalled by the low quality of their work (and the extremely long time they took to create it), so I decided to just make my own.

For my personal site and the CS department’s site, I was using Emacs with html-helper mode and manually managing things like links. Though it was very nice to have hotkeys wired to insert specific tags, some things (like making site-wide changes or restructuring) were simply too time consuming. I’ve known for a while that Adobe Dreamweaver is pretty much the current gold standard for website design, but I was reluctant to use it because I’m not a big fan of WYSIWYG editors. However, the lady in charge of the Foreign Language Resource Center had made it quite clear that no matter how good my template was, she couldn’t use it unless I gave her a Dreamweaver template file that could be edited from within Dreamweaver.

I didn’t actually need to make a new template; the ePortfolio project would have gone on without me. But it’s times like this that the hacker inside simply won’t take no for an answer. So I decided to sit down and learn just enough Dreamweaver to get the job done.

However, after using Dreamweaver for a few hours and creating a full skeleton of a portfolio in record time, I’ve decided there is no going back for me. First and foremost, Dreamweaver fully unites WYSIWYG editing and roll-your-own code editing. I wouldn’t have used it if editing the code by hand were difficult or impossible. The instant preview is very handy for showing just what is going on without needing to fire up a browser. More importantly, the preview lets you see exactly how your code maps to elements on screen, which is very handy if you are trying to nail down a tricky layout.

I’m still not a fan of using the WYSIWYG mode to build the layout, because I like having an intimate knowledge of exactly what my code is doing. At the same time, I’ve realized that a WYSIWYG mode comes in very handy once the layout is set and all that remains is to put down the content. Auto-insertion of tags is nice, but not having to look at tags at all when you’re writing your content is even better. That said, Dreamweaver has a tendency to insert and prompt for element attributes that I would rather set using CSS. This is a mild bother, but acceptable in view of the other productivity boosts you get.

Templates and automatic link management are two other features I’ve come to love. Templates alone are a massive productivity boost, especially since you can set distinct editable and non-editable regions. This lets you define exactly which parts of your pages stay constant across the site and which can change. The larger your site gets, the more important this becomes. On the same note, automatic link management is an equally useful feature. More than once I’ve found myself rearranging my pages and then having to go about changing links manually. I haven’t had a chance to properly utilize this feature yet, but I intend to do so in the future.
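For anyone curious what that looks like under the hood: a Dreamweaver template is just an HTML file (saved as a .dwt) in which special comments mark the editable regions; everything outside them is locked in pages based on the template. A minimal sketch (the region names and div ids here are my own examples, not anything Dreamweaver requires):

```html
<!-- Minimal Dreamweaver template (.dwt) sketch. The region names
     ("doctitle", "mainContent") are arbitrary illustrative choices. -->
<html>
<head>
  <!-- TemplateBeginEditable name="doctitle" -->
  <title>Untitled Document</title>
  <!-- TemplateEndEditable -->
</head>
<body>
  <!-- Outside any editable region: locked in every page built
       from this template, and updated site-wide when the template changes -->
  <div id="header">Site banner and navigation</div>

  <!-- TemplateBeginEditable name="mainContent" -->
  <p>Page-specific content goes here.</p>
  <!-- TemplateEndEditable -->

  <div id="footer">Site-wide footer</div>
</body>
</html>
```

When you edit the template, Dreamweaver pushes the changes to the locked parts of every page built from it, which is exactly what makes site-wide changes cheap.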

Coming up ahead

I’ve decided that I will be using Dreamweaver as my tool of choice for future web design work. Things I’d like to play around with in the immediate future are multiple templates for a single site, as well as the site management tools. I’m also going to migrate my existing sites to Dreamweaver and see whether I can actually develop them faster there. I know I’ve only scratched the surface of what Dreamweaver can do, but I think it’s going to be a fun experience learning more.

Google and Wikipedia as the gatekeepers of the Internet

In January of last year, I read a post on Coding Horror about how Google was gradually becoming the starting point of the Internet. Though Jeff Atwood’s points were well made and valid, I didn’t think much of it at the time. In the year and a half since then, some things have changed. Google has been moving away from pure search while still using its position as a focal point of the Internet: Chrome and Android both increase Google’s prominence in the computing world. However, I hadn’t quite realized that riding on Google’s pre-eminence is another Internet powerhouse: Wikipedia.

My story begins like this: I was doing research on IBM’s original Personal Computer and how its BIOS was reverse engineered by Compaq to produce clones. I was focusing on the ethical issues of reverse engineering, and of course I turned to Google to find online sources. Here are some of the search queries I typed in:

  • IBM PC
  • Reverse Engineering
  • Utilitarianism
  • Kant
  • Categorical Imperative
  • Social Contract
  • Compaq

In all but the last search, the first result was the corresponding Wikipedia article. I found it interesting, and somewhat unnerving, that even IBM’s own website comes in only second when searching for IBM’s most famous product.

The duopoly that is beginning to form is quite interesting. Google is gradually establishing itself as the chief filter and navigator for the web. Competitors like Yahoo! and MSN would require a massive, perhaps combined, effort to take on Google Search in any serious way, and newcomers like the much-hyped Cuil are simply not good enough. With Google’s multipronged effort to cut into both business computing (Google Apps) and the common man’s net experience (Chrome and Android), it’s unlikely that this trend will reverse itself anytime soon.

Wikipedia, meanwhile, is becoming the most easily accessible content provider for an ever-wider range of not-too-specific information (and a fair amount of specific information too). How many school or college projects today don’t involve Wikipedia in some way? Even though professors and teachers might wholeheartedly (and perhaps with good reason) insist that Wikipedia cannot be cited as a reputable source, the fact remains that for many students (and many other people) research on a topic begins, and in many cases ends, with Wikipedia. Wikipedia is becoming the Wal-Mart of the Internet: its information may not be great, but it’s good enough and it’s cheap, in terms of money, time and effort.

So What?

Well, perhaps nothing at all. After all, Google hasn’t shown any signs of taking over yet. I trust Google enough to give Gmail all my email; even my college email is routed through Gmail because it’s the most efficient solution out there. I use Google Reader to gather information from around the web, and I’m a fairly regular user of Google Calendar and Google Docs. I trust that no human eyes are viewing my information or reading my documents, and that the software machines running on Google’s server farms do their jobs well. Their ads are discreet and unobtrusive.

Google makes money. Lots of it. Billions of dollars every year. And not just for itself: there are thousands of businesses that make millions off Google Ads, and many popular businesses get some 70% of their traffic via Google search. A lot of Google’s money also goes to supporting various open source projects and funding university research. I would rather have Google in control of that cash flow than not have it at all.

However, it cannot be denied that Google is quickly and surely becoming the Internet. As Jeff Atwood tells us, if your website is not on Google, it might as well not exist. Rich Skrenta, possibly the creator of the first personal-computer virus, is not exaggerating by much when he says that the Internet is essentially a single point marked G connected to 10 billion destination pages. If we follow the more conventional analysis of the Internet as a weighted graph, the weights on Google’s outgoing links far outweigh those on any other node’s (with the possible exception, in some cases, of the outgoing links from Wikipedia).

And into this, Wikipedia fits perfectly. In the free world of the Internet, it’s hard for businesses to make money selling pure information. But pure information is the heart-blood of the Internet, its first cause for existence, and it’s this need that Wikipedia serves. What you can’t buy, you can (mostly) get for free on Wikipedia. Wikipedia cannot survive alone: it needs efficient search to make proper use of all its information. In exchange, it acts as a sort of secondary filter. After Google’s search filters away the cruft and deadwood that litters the Internet in the form of spam, porn and obsolete webpages, Wikipedia steps in to provide a mostly reliable core from which to branch off to other points, or from which to draw inspiration for further searches. Sure, you can decide not to use Wikipedia, and pay the consequent price of having to sort through the mass of knowledge by yourself (though perhaps with Google search at your side). But would you really want to?

If you want to fight Goliath …

You’ll need more than a slingshot. Google and Wikipedia are both fairly well entrenched in their respective areas, and the tasks you’d have to accomplish to shake them are certainly Herculean, if not harder. To beat Google, you’ll have to start with an equally good algorithm. New competitors like Cuil aren’t all that bad, but they’re not good enough. With the rise of rich media, your search engine will have to handle pictures, videos, perhaps even Internet radio stations. Of course, now that Google has branched away from pure search, you’ll have to take that into account, or at least team up with someone who can. People are more comfortable with a unified interface and a single way of doing things than with a bunch of smaller ones, and Google still has some work to do in that area. After that, you need a way to make people money: breaking into Google’s ad empire might be harder than making a dent in the search market.

As for Wikipedia, you would need to find a way to collect a vast amount of information on a variety of topics and also keep it up to date. That’s hard and expensive under a proprietary model, while an open system has problems with abuse. Then there’s the question of actually getting people to use your resource. Giving it away for free is no longer good enough; you’ll have to offer something that Wikipedia doesn’t. And Wikipedia offers a lot.

Neither task is for small players. So who could do it? Someone with deep pockets, for one thing. The battle for the Internet isn’t going to be over in a flash; it’ll be a long, protracted war lasting years (if it’s ever fought, that is). Speaking of Flash, Adobe and the Flash platform are another strong player in the arena, though in a slightly different way. Adobe and Google have mostly non-overlapping interests, but a partnership between Adobe and another strong player, such as Yahoo!, Microsoft or Amazon, might just tip the balance. Online software built with Flash, running on backends from Yahoo! or Microsoft, would be a serious contender to Google’s AJAX web platforms. At the same time, it might be more beneficial for Adobe to join hands with Google, especially since Flash is already YouTube’s backbone. Tightly integrating Flash with Chrome might cement Flash’s position as the rich-content platform of the Internet (with Google Ads thrown into the mix).

Of course, I’m probably getting far ahead of myself. Any serious competition to Google would involve a concerted effort by a number of interests over an extended period. That seems unlikely to happen with the current mess of competing interests, standards and technologies. For the time being at least, the Internet is still a point labeled G. The 10 billion connections are purely coincidental.