20 Years Ago Today

One day in the Spring of 1995, just a few months before I finished my undergrad degree, a friend in the student computer lab leaned over my machine and said “check this out.” He double-clicked the NCSA Mosaic icon on the desktop and showed me the World Wide Web for the first time.

It wasn’t much to see and I wasn’t impressed. I’d heard so much hype about The Information Superhighway and this was… this was all there was to it? We went through a bunch of random sports news he viewed every day and none of it clicked for me. Sensing my lack of excitement, he continued. We kept looking at random sites until eventually he showed me the David Letterman Top 10 Archive and it just about blew my mind. A college student was posting whatever Top 10 list Dave used on the previous night’s show to a giant long page on their college account. As a comedy nerd I loved Letterman but couldn’t catch every episode of Dave’s show while busy with school, so I found this to be an incredible resource. It all clicked the moment I realized it was just some kid at some random college publishing whatever they liked, and I could find it and enjoy it for free, every day going forward. I was hooked.

A few months later I graduated, but I stayed at the same university to start a Master’s program. I bought my own home computer and spent every spare moment reading the web, while also working in my advisor’s lab analyzing samples. We had a lot of downtime between sample analyses, so I could surf the web while I waited for results. By that Fall, I began a new research project while continuing to devour the web in my free time. Eventually one day I figured it was time for me to be part of this—I wanted to build my own web pages instead of just reading them all day. I couched this to my graduate advisor as a way of promoting our work and publications to the greater world. He’d already been dabbling in it and gave me the green light to learn how to publish our research online.


To give you an idea of how long ago this was, I went into a Waldenbooks in a mall to buy a book on HTML. I’d dabbled in Justin Hall’s Publish Yo Self section and other online how-to guides, but I knew having everything laid out soup-to-nuts in a book in front of me would be a better tool to learn from—plus I was a college student used to paying too much for books.

It was Christmas Eve, 1995, and while the store was busy, I scoured the shelves and eventually got my choices down to two books on publishing HTML. One was about writing HTML in Microsoft Word, and even then I could tell that sounded like a bad idea. Instead, I grabbed Creating Your Own Netscape Pages by Andy Shafran, which covered all aspects of HTML in plain, simple text; the only helper apps mentioned were a text editor called HotDog and an image editor called Paint Shop Pro. I bought the book.


Being that I was 23 and in college, I didn’t have much money to give gifts while simultaneously being too old to get fun gifts anymore, so I had a fairly boring and uneventful Christmas at home with my parents. That night, I was having trouble getting to sleep. At around 1am, I realized I couldn’t sleep at all, and then an idea hit me like a bolt of lightning. I decided to grab the book I’d bought a couple of days before, and read it. And not just read it, but really read it.

That night I did two things I’ve never repeated. I read an entire technology book cover to cover, not skipping a single page or reading anything out of order, and I read it all in one sitting, straight through, overnight.

I sat in front of my computer, opened the book at 1am, and kept reading while occasionally typing things into a text editor. I picked out images and tweaked them in Paint Shop Pro. I learned how font sizes and lists and custom bullets worked, and I wrote down everything I wanted to see on my own page. I typed up a little bio and a list of links to stuff I enjoyed. I found a web page counter and copied the appropriate code to my page. I immediately fell in love with the BLINK and HR tags and couldn’t get enough of having giant borders on things. I was building a cool page that described what I did and liked to do, and figured the world would be impressed by my eclectic collection of links (kind of like every person in college who used the word eclectic to describe their own music collection, as if everyone was supposed to be impressed by it).

At 7am on December 26, 1995, as the sun was about to come up, I was finally finished with both the book and my web page, so I uploaded it to my college web server. I nervously opened the URL in a browser and much to my surprise it worked, and it looked exactly how I’d pictured it (this was one of the very few times something worked on the first try). I was stoked. It was incredible—those obscure instructions I’d written in a text editor actually made that colorful page. Holy shit, I actually made this. I finally went to sleep an hour later.


Tonight in 2015, on the anniversary of that day, I dug through tons of old hard drive backups, and the closest thing I could find was a version of that same first page from roughly 8 months after that night, along with most of my personal pages from 1997, right before I bought my own haughey.com domain. The copy of my homepage is linked here:


That morning, I knew I’d found something incredible in learning to publish online. While I had finished a couple of science degrees and was working on another, I had started school as an art major, and I really loved how the early Web married art and technology in ways I’d never seen before. For the first time I felt like I was using both sides of my brain simultaneously, and I knew building websites would become my thing someday.

A few months later, I considered quitting my Master’s program and striking out on my own to build web sites, but instead I stuck it out at school and finished my thesis and my degree. Unsurprisingly, my first freelance gig post-graduation was building a website for my department and all its faculty, about 50 pages in all over the course of a couple of months. My first real full-time job came shortly after, at an environmental engineering firm, making copies, pushing pencils, and writing environmental impact reports for cellphone poles being erected all over Southern California. After years of working in a chemistry wet lab analyzing samples, I hated having a desk job doing paperwork and quickly started looking for a web design job instead, which I found at UCLA in December of 1997.


It wasn’t easy to walk away from basically seven years of college education focused on environmental science to instead start working as a web designer. But I felt it in my gut the moment I stepped into the offices of a computer group at UCLA — this was where I belonged and I needed to drop everything to come here. If I didn’t get the job I interviewed for, I would do everything I could to find another one like it. And it didn’t feel like quitting Science or quitting anything, but instead like moving to a place I was supposed to be all along, opening a new chapter in my life. Thankfully, I got that job and things went well there and at every other job after. Tonight, 20 years later, I can fondly remember that night with the book, and how amazed I was that first time I loaded my very own web page in a browser and it all worked correctly. Ideas from my brain, jotted down as those obscure instructions, finally rendered on a screen for anyone in the world to see.

Today, I’m glad I got that book and stayed up all night reading it 20 years ago. Here’s to 20, 40, and hopefully 60 more years of doing the exact same thing and feeling similarly amazed by it all.

Thoughts surrounding Google Reader’s demise

First off, I'm sad to see Google Reader is closing up soon (why so soon, when other Google apps came with 12-18 months of notice?). I know some people that developed and worked on the product and to this day I use it several times a day to keep up on a few hundred blogs I follow (as well as weirder feed things like recent comments on specific posts I'm interested in, obscure search results at eBay for items I'm tracking, and of course, mentions of my name or sites across blogs). I use the service almost as much as I use Twitter and it wasn't easy news to take, since I thought it'd always be around like water or electricity, run by the largest technology company on earth. Now I'm left second-guessing using any Google product that doesn't clearly carry advertising on it, knowing the plug can be pulled at any time. I thought I'd write up some thoughts below, plus some quick reviews of alternatives, in the hopes others in the same boat can figure out what to do next.

Why is RSS interesting?

I admit the world of RSS is a pretty geeky circle to run in (if you know what RSS stands for, you're officially in the club). You've got a mix of web technologists, nerds, and news junkies that are all so busy they no longer want to browse the web; they'd rather check a stream of updates that were fetched for them. RSS is basically TiVo for the web, and like TiVo in, say, the year 1999, only the hardest-core nerds are interested in it. Most web users find it useful once you explain how it works (sites publish a file that gets periodically checked and fetched, to be reposted in your client of choice for reading updates), but like TiVo, it's a huge hurdle to explain to people why this technology is worth it and saves so much time.
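If you want to see how little magic is involved, here's a tiny sketch in Python using the feedparser library (my choice for illustration; any feed-parsing library would do, and the URL is made up). This is the whole trick every reader is built on: fetch the feed file, parse it, list the entries.

```python
# The core of every RSS reader: fetch a site's feed file, parse it,
# and list the entries. Assumes the third-party "feedparser" library
# (pip install feedparser); the feed URL is just an example.
import feedparser

feed = feedparser.parse("https://example.com/feed.xml")

print(feed.feed.title)        # the site's name
for entry in feed.entries:    # one entry per post
    print(entry.title, "->", entry.link)
```

A reader just does this on a schedule for every site you follow and remembers which entries you've already seen.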

Why should anyone care about Google Reader?

Google Reader was the best of breed. It started around 2005 as one of the first few web-based services for reading RSS. Up until then most people used a desktop app to read RSS feeds from sites, but I personally liked the flexibility of using a web browser on any computer to stay up to date on what I'd read (much like the old days of POP email: when your unread counts got out of sync across devices, email was harder to use). Around 2007, Google Reader started adding more features and getting easier to use, and by 2010 it was getting fairly amazing, notifying you of new posts within seconds of them going up (relying on the global network of Googlebots scouring the web) and even providing feeds for pages without RSS.

My favorite time is around mid-2011. By then Google Reader was fast, easy to use, reliable, available on my mobile devices natively in a browser or in a client like Reeder, and there was also a hidden social network of other news junkies and nerds. You could share items with the public or just your friends, and you could comment on articles among the friends you'd connected with. I used to follow random people I didn't know in any other context but knew through their amazing shared items. Almost all the social stuff was stripped in late 2011 to make way for Google+ share buttons, but they didn't work the same and took your shared items away from Reader into an entirely different site, so few people used them.

Google Reader going away is a huge problem. It means the loss of a beloved app for a lot of nerds and news junkies, including a great number of journalists, not only those working in the technology field. It means a lot of tiny blogs won't get noticed as easily if we can no longer easily monitor infrequently updated blogs written by experts. I'm convinced we'll see some effects of this closure on journalism as writers scramble to find alternate ways to monitor thousands of contacts and researchers writing online.

While Google was innovating on Reader from 2005-2013, pretty much every competitor slipped away. Desktop clients were waning, and web-based, locally hosted readers gave up development as Google surpassed what a few small developers' worth of resources could create. Eventually many apps simply tied into a centralized Feed API Google launched, so you could basically use Google Reader in different clients and interfaces while always keeping your sync/read numbers correct. Recently I noticed quite a lot of filtering has found its way into Google Reader, where I'm presented with new posts from only my most favorite sources at the top when my unread counts are high, which is a nice touch and points to some interesting AI happening in the background to figure those out for me.

The thorny problems of writing your own

Everyone I know is scrambling for alternative services, and there are a handful around with many more being built. Seeing these new, smaller outfits' servers get slammed by a few thousand new users shows just how big and reliable the Google infrastructure behind Reader is. There are a lot of thorny issues to solve for anyone planning to make a successor to Google Reader:

1. The update bot – Google had the advantage of having not only thousands of server farms across the world but many thousands of bots running constantly across millions of sites every day checking for updates. Building a bot isn't the hardest thing in the world (see the sketch after this list), but building one that can quickly scan through hundreds of thousands of sites a day is non-trivial and a major endeavor. Keeping one running is more than a full-time job.

2. Feed APIs – The central brain of any RSS reading app is often available via programming interfaces so your UI can stay in sync with your website view and mobile apps. A lot of current RSS readers rely on the Google Feed API that is likely going away, so it'll be a fairly big project for anyone to rebuild this for their app. I have heard talk of people trying to share resources here, attempting to make a centralized service others can use, but I don't have high hopes of that coming together in the very short time frame we have.

3. Search – I recall someone formerly at Google once telling me that providing custom search across all your feeds was a huge undertaking that basically requires a service to keep a copy of every blog post in every blog ever tracked in the entire system, and provide that indefinitely. I don't use search much in Google Reader but I hear that's a killer feature for many others. The feature obviously ramps up your storage needs for any project.

4. Economics – The toughest problem to solve, in the end: how many people would pay for building and maintaining a service? How many users did Google Reader ever have, and what small number would pay someone else to try and make something as good? This is the tough part, and beyond a few thousand nerds, I'm not sure you can convince a larger casual web audience that your product is worth paying for. A lot of outfits are trying to create magazine-like applications that suggest interesting articles from their system, and that may be the way to lure the "normals" to a news reading service, but it's tough to say whether, even after building the immense hardware and software required to run a reader-type app, it's possible to support more than a tiny team of 2-3 programmers on the revenue from users. That said, I'm actually wary of RSS reading apps that don't charge. I want anything replacing Google Reader to stick around.
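To make the update-bot point concrete, here's a minimal sketch (in Python, using the requests library) of the polling core: HTTP conditional GET with ETag/Last-Modified headers, so feeds that haven't changed answer 304 and cost almost nothing to check. The feed URL and the in-memory cache are placeholders of mine; a real bot adds scheduling, parallelism, politeness delays, and persistent storage.

```python
# Minimal polling bot: use HTTP conditional GET so unchanged feeds
# return 304 Not Modified and cost almost no bandwidth to check.
# Assumes the "requests" library; URLs and storage are placeholders.
import requests

cache = {}  # feed URL -> last seen ETag / Last-Modified headers

def poll(url):
    known = cache.get(url, {})
    headers = {}
    if known.get("etag"):
        headers["If-None-Match"] = known["etag"]
    if known.get("modified"):
        headers["If-Modified-Since"] = known["modified"]

    resp = requests.get(url, headers=headers, timeout=10)
    if resp.status_code == 304:
        return None  # nothing new since the last check

    cache[url] = {
        "etag": resp.headers.get("ETag"),
        "modified": resp.headers.get("Last-Modified"),
    }
    return resp.text  # hand the raw feed off to the parser

for feed_url in ["https://example.com/feed.xml"]:
    if poll(feed_url) is not None:
        print(feed_url, "has new content")
```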

Quick reviews of existing readers

Since the announcement, I've been playing with alternatives to Google Reader. I didn't try out any desktop or self-installed applications, since I move from computer to computer fairly often and need everything to be centralized and web-based. Here are some quick thoughts on each service currently out there:

Feedbin: This is a nice, simple reader interface, clean and doesn't get in the way. I'd describe it as feeling like 2007-era Google Reader, before they added social features to the app. It costs $12/yr, which is good to hear, but so far I've found myself having to click every headline to see a post, as it doesn't seem to offer the low-friction "river of news" view showing all new posts from all the blogs you follow in a single stream that marks itself as read as you scroll. It also required an uploaded backup of my Google Reader blog list, which caused it to think every single blog I follow had all new items, meaning I had to hit "mark as read" on everything and start over.

Newsblur: A popular suggestion was this one, which is free up to (I think) 100 feeds, then $24/yr, with a $36/yr option for the heaviest users. The service is slammed and it took me a day to even sign up, but once on their development server, I was really happy with it. I could import my blogs directly from Google Reader and it maintained read/unread status for me. The feature set is really close to maybe 2010-era Google Reader, with a social component including sharing and comments from friends, but I also noticed comments on posts from random readers, which could be kind of annoying. There are some attempts to filter items towards the stuff you like most, and so far this one is the big champ for a reader replacement.

Feedly: Feedly is slick looking, but annoying in some ways. It requires a Chrome extension in my browser that also inserts a little ghosted share icon into the lower right corner of every web page you view, which bugged me. The default views try to look like the Flipboard iPad app, but you can get a Reader-like view if you dig deeper into it. I gave up on this shortly after I imported my blogs from Google Reader, as it seemed the service was also built on the Google Feed API and would need to transition off that soon. The service seems to be free, so I'd be wary of jumping on board long-term as a replacement.

The Old Reader: This is something I dabbled with last year, and coming back to it again I noticed it's pretty close to the way Google Reader looked and worked in 2010-11. They built it to bring back the sharing and commenting aspects, but the service has been fairly slow since the Google Reader announcement and I didn't notice a way to pay for an account, so I'm not sure what its future prospects are.

There are a lot more options out there, and since I tweeted what was essentially meant as an "I volunteer as Tribute" offer to help build a new RSS reader, I've heard from another half-dozen or so developers and companies working on a Google Reader replacement, so expect to see many more options in the coming months.

“Why would anyone send me this?” — Aunt Fannie

I just had that thing happen with an online store, where if you once bought a gift item for someone and had it shipped to them, all subsequent orders you place for yourself (even years later) end up being sent to your old giftee instead of to you.

This has happened to me several times over the past few years, and every time I've gone back to double- and triple-check that only my address appears anywhere in the order forms, and still, the items end up on the doorstep of a long-lost friend or family member instead of my own.

It feels like someone, somewhere made a bad programming decision ten years ago as a shortcut — why look up the order details from the most recent order when you can just automatically assume it is going to the last address on file? — and we're all paying the price of that decision today.
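Here's a guess at what that shortcut looks like versus the obvious fix, in purely hypothetical Python on my part (all the names are mine, since I obviously can't see the store's actual code):

```python
# Hypothetical sketch of the bug: checkout defaults to the most recent
# address on file (which may be an old gift recipient) instead of the
# address attached to the order actually being placed.
from dataclasses import dataclass, field

@dataclass
class Order:
    shipping_address: str = ""   # what the buyer entered on this order

@dataclass
class Customer:
    default_address: str = ""
    address_history: list = field(default_factory=list)

def ship_to_buggy(customer: Customer, order: Order) -> str:
    # the shortcut: "just reuse the last address we shipped to"
    return customer.address_history[-1]

def ship_to_correct(customer: Customer, order: Order) -> str:
    # use the address on this order, falling back to the buyer's own
    return order.shipping_address or customer.default_address
```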

Build this: Visual TiVo for my computer

I spend hours every day on my main desktop computer, and I come across bugs in my own code and others' code often. Today, as I was trying to help a friend move a blog off wordpress.com, I swear I saw a status screen change an important value when an unrelated setting was changed. It was a showstopper bug but I could not reproduce it. I couldn't switch Firefox into offline mode to see it, since it had already reloaded the page to show the new value on the status screen.

At that point I wished for something similar to Time Machine, but for not only what URLs I had previously loaded (already in browser history), but what the pages I'd viewed actually looked like. Even if I was crazy and the values didn't change, it would have been nice to look back and make sure some other action changed the setting. I realized it's not just websites that this might come in handy for, but any application running on my desktop.

So here is my idea: build an app that silently takes a screenshot of my entire desktop every 5 seconds in the background. Any time I want to look back and figure out how I caused a bug in an application, I'd launch this background app and it would assemble a QuickTime movie of every desktop screenshot taken in the last hour. That's exactly 720 images, so playback at 24 frames per second would give you your last hour of computer use in a tidy 30-second movie.
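The capture half is simple enough to sketch. Here's a rough Python version for a Mac, leaning on the built-in screencapture command (the paths and the ffmpeg assembly step are my own placeholder choices, and a real app would reorder the ring buffer chronologically before building the movie):

```python
# Rough sketch: capture the desktop every 5 seconds into a rolling
# one-hour ring buffer of 720 PNGs. Uses macOS's built-in
# `screencapture` command; ffmpeg stitches the frames into a movie.
import subprocess
import time

FRAMES = 720      # one hour of shots at one every 5 seconds
INTERVAL = 5      # seconds between shots

i = 0
while True:
    # -x suppresses the camera shutter sound; old frames get overwritten
    path = f"/tmp/shot_{i % FRAMES:04d}.png"
    subprocess.run(["screencapture", "-x", path])
    i += 1
    time.sleep(INTERVAL)

# To assemble the last hour into a 30-second movie (run separately):
#   ffmpeg -framerate 24 -i /tmp/shot_%04d.png -pix_fmt yuv420p lasthour.mp4
```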

As a programmer and designer myself, I know finding bugs is hard enough in my own stuff, but reproducing them for other programmers is much harder. This kind of application could really come in handy — if you couldn't figure out exactly how to reproduce the bug, at least you'd have a nice little video of the bug in action to show a developer, and visual proof of the results.

The unfortunate mainstreaming of internet douchebaggery

Today someone spammed MetaFilter on behalf of Conde Nast publications, and it pissed me off way more than the average occasional spammy self-promoter on MeFi does. We have a strict rule at MeFi (since there's no editorial vetting upfront) that you can't post about your own stuff; you have to make posts about interesting random stuff you found on your own. Unfortunately, that doesn't matter to the douchebags intent on ruining the web for everyone else with search engine gaming, as long as their clients benefit, so we end up having to delete these keyword-laden posts that feature over-the-top fake testimonials about sites they "found" when they really work for them.

What pissed me off today was seeing a normally reputable outfit like Conde Nast stooping to hiring a dodgy firm that employs such lame spammy activities. I know the response from Conde Nast or the spammy SEO company will be the same I've heard a thousand times: "It was one rogue employee" or "We didn't know the firm would employ such tactics." I heard the same thing when the Times (UK) was found spamming social sites earlier this year.

The point that seemed to be lost in the Times story was that a cornerstone of journalism that had been publishing for hundreds of years would stoop to such lame-brained antics. You'd think that someone higher up at a place like that would realize that getting a couple percent more advertising revenue by ethically shady means wasn't worth jeopardizing the reputation or position of a 223-year-old newspaper — that institutions with a long-term vision shouldn't be interested in a quick buck by any means possible.

It's a bummer to see Conde Nast hiring someone to "optimize" search engines for them (where "optimize" means spam the web and generally make social sites and tools less useful for everyone, in the hopes they do better for certain key search phrases), but given where the economy is headed, I suspect we'll see a lot more big-name outfits and longstanding institutions making these same mistakes and resorting to problematic methods of increasing their bottom line, and frankly it sucks for everyone involved. It sucks for anyone using the web and wanting decent, honest search results based on the real quality of information (not just the information promoted by self-interested parties). It sucks to see industry leaders with dozens or even hundreds of years of successful business think this is a sensible approach to the web. Finally, it sucks to see some chucklehead get paid to spam websites in ways that are becoming so normal that people think this is something every business should do.

This is broken too: Threadless shopping cart logins

Threadless is my favorite place to buy t-shirts, period (this includes any offline stores). I’ve bought dozens and dozens of them and I even subscribed to the shirt-of-the-month club for a year, but every time I make the mistake of throwing a few shirts in my virtual cart and then remembering to log in afterwards, I lose all my previous selections. I buy shirts there every couple of months, and in between visits I often forget about this bug in the long-lost hope that someone has fixed it. When I got hit with it for the millionth time tonight, I took a few quick screen captures to demonstrate the problem.

Here is a video of the shopping cart failure:

The first 30-second bit shows me adding a shirt to my cart, continuing to shop, then logging in and trying to check out, only to have my cart turn up empty. Not good.

The second, shorter bit shows what happens after I add a shirt to my cart, go to check out, then remember I should log in to grab my saved address/credit card/etc. info: as you can see, it clears out the cart. Oftentimes I lose 15 minutes of shopping time picking out just the right shirts in my size because the cart clears out every time upon login. Then I have to try to remember all the designs I liked and put them back in the cart (often I just quit and shop the next time they send me a “new releases” email).

Threadless, I love you guys to death, but I’ve encountered this bug for about two years and would love love love it if you fixed it with some cookie/session storage of shirt selections so I don’t lose my cart upon login.

(Why log in? If you don’t, it basically creates a new Threadless account with your exact same details, and there was a time I was subscribed to their mailing list three times under different “accounts”.)
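The fix I have in mind is simple enough to sketch in a few lines of Python (entirely hypothetical on my part, since I have no idea what Threadless runs on): keep the anonymous session’s cart, and merge it into the account’s cart at login instead of throwing it away.

```python
# Hypothetical fix: merge the anonymous session cart into the account
# cart at login instead of replacing it. Carts map item id -> quantity.
def merge_carts(session_cart: dict, account_cart: dict) -> dict:
    merged = dict(account_cart)
    for item, qty in session_cart.items():
        merged[item] = merged.get(item, 0) + qty
    return merged

# The shirts picked before logging in survive the login:
print(merge_carts({"shirt-123": 2}, {"shirt-999": 1}))
# -> {'shirt-999': 1, 'shirt-123': 2}
```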

update: by the power of greyskull, this has been fixed!

Eleven

After upgrading my first Mac (a PowerBook) to another PowerBook, then to an iMac, and finally to a Mac Pro, I realized five years of using the Migration Assistant had finally run its course. Various basic parts of Leopard (mostly Keychain Access) stopped functioning properly, and since everything ran great on my new MacBook Air, I decided it was time to back up, format, and reinstall fresh on my main Mac Pro.

A few hours after the fresh install, I added Firefox and my most often used apps like Transmit and TextMate. Every few days I realized I needed one more app, so I’d download and install it. After a week or so, I was pretty much done reinstalling.

Last year I wrote about doing as much as possible using online apps and how I found it really handy, so today I looked at my Applications folder to see how many things I’ve installed aside from the default Mac apps. I counted 11 applications total outside of iLife and iWork. That includes a couple of proprietary things I need for installed hardware (like the wonderful ScanSnap), but it’s mostly the basics (Firefox, Transmit, etc.) for doing my everyday work tending MetaFilter.

The thing that surprises me is that I reformatted my computer about six weeks ago, and I haven’t felt like anything is missing since. Thanks to a combination of almost all my work being done online and the great set of built-in functionality in OS X, I can get by on an almost completely clean system.

Ten years ago I had literally hundreds of apps on my Windows box, and I felt like I constantly needed more.

Ads good! No ads better!

If you’ve followed this site for a few years, you probably saw my old essays introducing Google’s AdSense to the blogging public, and that time I said ads in RSS were a no-no. Today I wrote an extensive update on the same subject over on my new blog: How ads really work (superfans and noobs). I basically lay out everything I’ve learned from hosting ads for the past five years, including some data from my own sites and those of several friends.

My new site: fortuitous

When I came back from Austin, I mentioned that I wanted to do a new site focused on business-type advice. After a month or so of the idea gelling in my head, I wrote down about 30 ideas for essays I’d like to write, banged out a mockup, and looked up a bunch of goofy domains. A couple more weeks passed, and thanks to the CSS coding of Ryan Gantz, the editing skills of Anil Dash, and the nice fellow that sold me the domain cheap, I give you: fortuitous.

It’s a new essay every Monday about some aspect of business that I’ve learned while running the MetaFilter/PVRblog/etc. empire. Nothing too earth-shattering, but it’s a fun outlet and I think it’ll help out a lot of people in a similar situation. Subscribe to the feed and follow along.

(btw, the design of the bottom-frame CSS hack thing is totally cribbed from NorthTemple, and it does display funny if you scroll your mousewheel like mad. It was also the first thing I’ve ever built using Coda as the IDE and it was fantastic; with a little more polish/features it’ll replace TextMate as my editor of choice)