Monday, December 15, 2008

KillerNIC: trend or OS "epic fail"?

I was following a friend's Twitter feed the other day, and it turned me on to a product called KillerNIC.

This is a high-end gaming network card that advertises superior network performance in games. At first glance, I thought there was nothing to this -- after all, network interfaces have been commodity items for years now; hardly anyone buys a dedicated NIC that isn't part of a motherboard or laptop these days.

But then I came across the striking claim: "Completely bypasses the Windows Network Stack"

Wha-what?! Ok, part of me thinks this is really cool. Maybe it's the beginning of hardware optimization of parts of the stack that were previously tightly integrated with a particular OS. DMA and asynchronous bus access are the future! Building-block integration stacks, maybe?

But another part of me says "wow, Windows has really epic failed to get to this point." Let's see, it was cheaper and more effective to create separate hardware and completely bypass the Windows network stack than it was to deal with ten years of cruft and bad plug-and-play interrupt models? Having been a Win32 developer... yeah, actually I can totally believe that.

Apparently this card also reins in the "spoiled child" model of modern application behavior. Now when McAfee or Windows Update "calls home" for updates in the middle of your game, you won't be disconnected. The card can be configured to give maximum priority to your current focus application and idle priority to background apps.
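I can only guess at the mechanism, but the behavior they describe is basically strict priority queuing. Here's a toy model in Python (entirely invented -- no relation to the card's actual firmware):

from collections import deque

focus_q, background_q = deque(), deque()

def next_packet():
    if focus_q:
        return focus_q.popleft()        # the focused game always wins
    if background_q:
        return background_q.popleft()   # updaters get the leftovers
    return None

focus_q.extend(["game_pkt_1", "game_pkt_2"])
background_q.append("mcafee_update_pkt")
print([next_packet() for _ in range(3)])
# ['game_pkt_1', 'game_pkt_2', 'mcafee_update_pkt']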

This points to another epic failure of application developers. It's gotten so bad that we simply can't trust you any more to do the right thing, so we're taking away your network privileges at the hardware level. Ouch! Once again, I can totally believe that.

Sunday, November 2, 2008

Magpie: monetize your friends and family...

Just ran across an article about Magpie on Mashable today. Magpie is basically a way to sell your friends down the river -- or in more marketing speak: "a way to monetize the popularity of your social network."

Maybe this sounds like a good idea to some bright people in marketing -- after all, it's viral, it has presence and it's easy. But why is a social network so popular to begin with?

What makes a social network work?

Let's start with the most basic social network of all: your family.

Now imagine that you show up to Thanksgiving dinner, but instead of hugging your mom, you tell her that she really should check out [brandname clothing], that [brandname gravy] is much better than her homemade, and that everyone should use [brandname toiletpaper] after the game (brought to you by [brandname], of course).

By the way, after you tell your family all this, you mention sheepishly that you weren't saying these things because you really cared about them, but because you're getting $0.12 per impression.

Of course, your family (I hope) loves you, so maybe they'll put up with this, like so many families put up with Amway, candy-bar sales, etc. They'll think "oh boy, I hope this is just a fad"... if it continues for a while, you might find yourself getting fewer invitations to Turkey day each year.


What's the lesson?

If you take a shared resource like a popular social network and exploit it, you may end up weakening it or destroying it (i.e. Tragedy of the Commons). This kind of exploitation has already had a chilling effect on e-mail, instant-messaging, and blogging channels.


But there's a better way, if you really understand the value of social networks...

Corporations are already treated by the law as virtual people. Why don't you simply act like virtual people and start your own social networks instead of co-opting ours? If you are really cool/hip/interesting "people", then we will follow you! We'll tell our friends about you and they'll follow you too. You'll have your own community to say anything you want!

And you won't be pretending, faking, or buying your friends.

Saturday, November 1, 2008

The dark side of subscriptions...

I decided to do a little financial house-cleaning today and cancel any services that I don't really use anymore. It's not that they were bad services, or even malware scams -- they are legitimate services from Fortune 500 providers... I just don't need them right now.

I signed up for them originally because it was easy: I could do it online, it was fast, and I wanted the service. But as I found out today, sometimes they are virtually impossible to cancel once you've signed up.


Obstacle #1: How do I cancel my service?

Although the subscription buttons are easily accessible, it's hard to find a "Cancel Service" button anywhere on the account pages or in the company's support pages. All I found was a cryptic blurb giving a phone number, or an address to send a written request to.

I noticed that the cryptic cancellation number was different from the main service number. It had more restricted service hours (for example, 10-4 Monday through Thursday, Pacific time only) than the "purchase" side of the business, which had 24-hour sales support.

Obstacle #2: Automated Call Centers

Many companies claim to have wonderful customer service records. However, "cancellation" was buried deep in this company's automated phone menu, making it very difficult to navigate to the correct option. I was disconnected repeatedly while trying to cancel their service. Each time, I navigated painstakingly through a slightly different path in the system, was put on hold for 5-10 minutes, was sometimes contacted by a service rep who told me I'd be forwarded to the "correct department", and then was simply disconnected.

What kind of customer service is this?

Obstacle #3: Customer Service

Finally I got through to someone, who kept me on the call for no less than 30 minutes. 30 MINUTES! We went through all the options they offered and all my reasons for not continuing the service. I had to repeatedly defend my reasons for wanting to cancel the subscription. Why? Why should they even ask as a condition of "granting" my request?

In the 90's we saw the rise of the corporate-slave worker. Are we now seeing the rise of the corporate-slave customer? Do you have no right to stop paying for a service you don't use!?


This is ridiculous and it has to stop. So here's my challenge to the industry:

If you make a subscription service easy to join, make it equally easy to cancel.

Otherwise you aren't being honest about your business model, you're not really keeping customers by offering a wonderful service that they use, you're merely keeping them by way of a corporate-protection racket.

Going forward, I refuse to sign up for any more subscription services unless they meet this simple criterion.

Monday, October 13, 2008

no-risk investments...

I woke up this morning to an NPR segment featuring a UK professor calling for no less than a revocation of the 1997 Nobel Prize in economics for the option-pricing formulae.  At first I thought he was a bit extreme... now I'm not so sure.

The Black-Scholes method gives a formula, under certain strict assumptions, that describes how to build a "risk-free" portfolio.  Since this formula was adopted by the Financial Accounting Standards Board and is widely used throughout the industry for options pricing, it has garnered a fair amount of negative attention with the recent world economic crisis in the news.  But is this unfair attention?

This news woke me up.  I feel as though I must be missing something, like I've slept through the past decade of hedge funds, etc., never really understanding the mechanism... but I understand enough math now to at least comment on the apparently disastrous use of these ideas.  Maybe I'm missing something, and a specialist can clarify why it's not really as bad as it sounds...

So, as I understand it, the basic mechanism is a clever way of observing the derivative of the valuation such that a hedge is perfectly placed: either way the stock goes, the option will compensate, resulting in a magically "risk-free" portfolio.  Obviously, this is huge news for anyone on Wall Street, since there is always substantial risk associated with every transaction.
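For reference, here is the textbook form of that observation in standard notation (general background, not something from the NPR segment). Hold \Delta = \partial V/\partial S shares of stock against each option, and the portfolio

\Pi = V - \frac{\partial V}{\partial S}\,S

is locally insensitive to moves in the stock price S; pushing that through Ito's lemma yields the famous Black-Scholes equation:

\frac{\partial V}{\partial t} + \frac{1}{2}\sigma^2 S^2 \frac{\partial^2 V}{\partial S^2} + rS\frac{\partial V}{\partial S} - rV = 0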

Now, there are some known shortcomings with this model that economists were well aware of.  For one, it makes certain strict assumptions about drift and volatility, namely that they are constant.  Volatility is the more important concept here, because while it can't be measured directly, there is a clever way of flipping Black-Scholes around so that you can solve for the implied volatility given observed option prices.  Economists quickly found that this value was anything but constant in the real world.  In fact, they talk of volatility surfaces -- solution spaces that can take many shapes depending on the type of investment.  But these are simple statistical generalizations -- there is no underlying theory of the shape of these surfaces!! (at least not yet)
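To make that "flipping around" concrete, here's a minimal Python sketch: price a European call under Black-Scholes, then bisect for the volatility that reproduces an observed market price. The numbers at the bottom are a textbook-style illustration, not real data.

from math import log, sqrt, exp, erf

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

def implied_vol(price, S, K, T, r, lo=1e-6, hi=5.0):
    """Bisect for the sigma that reproduces the observed price."""
    for _ in range(100):
        mid = (lo + hi) / 2
        if bs_call(S, K, T, r, mid) < price:
            lo = mid        # model price too low: volatility must be higher
        else:
            hi = mid
    return (lo + hi) / 2

print(implied_vol(price=10.45, S=100, K=100, T=1.0, r=0.05))  # ~0.20

If the constant-volatility assumption held, running this across strikes and expiries would return one constant sigma; in real data it traces out exactly the smile and surface shapes mentioned above.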

So, what we have is a system that lets us concretely know the shape of the surface AFTER the fact, but has zero predictive power to tell us what will happen.  Now, economists apparently desperately wanted this surface to have at least local linearity, but keep in mind that the essence of the surface is unknown and is at least indirectly related to a geometric Brownian factor (i.e. random!) -- so it's extremely unlikely that any linearity exists in the first place... but it's safer to say that we simply don't know what it is.

Now what mathematician in their right mind would suggest that this was anything more than shuffling one unknown into another variable, and then claim that at least it is "somewhat useful as a first-order approximation which can be adjusted"?

Are you kidding me?  Based on what?  Based on local-linearity assumptions and cooperative historical data from the first few years?!?  The space is unknown, man!  What made them think they could "adjust" it in any way that made predictive sense?

But it didn't even work that well when it was first introduced.  Look at Merton's board position at Long-Term Capital Management, the hedge fund that failed so spectacularly in 1998 that the Fed stepped in to coordinate a bailout of the industry.  How did I miss that towards the end of the dot-com bubble?!  4.6 billion dollars lost in four months!!  (Of course, back then I'd never even heard of a hedge fund.)

Yet the Financial Accounting Standards Board adopted this method of calculating options value, and now its widespread use is implicated in the worldwide economic crisis you see before you.

Can someone please explain how this made any sense whatsoever?  



[update 10/18]: Paul Wilmott's blog article in defense of Black-Scholes is an interesting read -- he does this for a living, so he certainly has a much deeper view on it than I do.  (It's a reference backing the Wikipedia article, and it also provides additional references to check out.)

He points out that Black-Scholes is very robust on the averages in spite of the obviously poor assumptions of the model, and is therefore a relatively simple model to use in practice (i.e. an industry workhorse, even though it's not perfect).  There are more complicated models, but not a lot of evidence that they work any better.  He asks: why use more complicated models when the fat tails aren't important most of the time?

Well, what about when it is?  

Such as when risk suddenly becomes very high market-wide and the correction deltas are huge?  Now you have a function that you assume is locally linear (because of its physics origins) -- but it isn't.  I think Black-Scholes might only be a proper "industry workhorse" when volatility is relatively constant.

It failed massively in 1998, and it has failed massively now.  Is it a coincidence that the market was experiencing wide economic volatility at both of those points in time?

I still have a big question about how worst-case scenarios were investigated for this model.  If a handful of high-volatility investments were considered in the context of stable market data, Black-Scholes might behave acceptably well on the averages.  But was a "total market meltdown" considered closely?

Thursday, September 25, 2008

a scary-deep idea...

I stumbled across this very strange idea the other day while talking to a friend about science and religion... it's related to the post I made earlier about quantum observers and straightening out what it actually means to be an "observer" creating reality.

Assuming that "observers" can create reality (and this is what the SEED article suggests has been shown experimentally over and over again) -- I raised one of my biggest conceptual problems with the idea: given multiple observers, how do we all "agree" to create the same reality?  If we don't create the same (or at least similar) reality, then how can we communicate about it at all?  More to the point, how come the observers seem unable to shape reality by their will?

She took up this last point by referring to a modern theologian's view that we also can't walk on water because every cell in our body would have to believe it was true in order to succeed...

At first I thought, how silly... but then, maybe there's something to that.  What if it wasn't just every cell in your body, but every cell, every rock, everything?  I thought specifically of something I read of Feynman's about another aspect of quantum physics: that there is really no separation, no distinction at that level between particles, rocks, plants, people... stars... it's all part of the same oneness.

Then I thought of something really strange... wouldn't it be funny if all the experiments and the maths weren't wrong, just our narrow interpretations of the meaning?  What if everything in quantum physics was pointing to the idea that there was and had only ever been One Observer infinitely observing Himself unfold?

...I paused and thought of the imagery of Buddhism, lotus petals and Mandelbrot sets infinitely shimmering in space and time...

Science can only take you so far, then you must dream...

Wednesday, September 24, 2008

history, math and misery...

Oh, so let me tell you about my history class... I had a weird revelation today.  As we were going through all these philosophers' and statisticians' views on the industrial revolution, I began to think, "wow, all these horrible living conditions, etc. were the result of really poor models of business."  The curious thing is that many of them were (and still are) mathematical models!

For example, with tons of surplus labor, the early industrial businessman seemed able to approximate labor as (people x hours) = value.  Since each person cost money in wages, they wanted to keep headcount low while simply scaling the hours up to 12 or 14 per day (children too).  As time went on, they slowly realized that worker efficiency diminishes past a certain point as hours increase... so, counter to the intuitions of the earlier model, they had to factor in shorter work days with more breaks.

Another camp believed that you should only pay workers subsistence wages -- because if given more, they would spend it on "recreation" or "procreation" (ha!).  Population was viewed as a fundamental problem because the food supply was thought to grow arithmetically while unconstrained population grew geometrically (the simple models of the time) -- hence the ideal was to keep the population low by keeping the poor working so hard they had no time for anything else!

Unbelievable!  
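The "simple models of the time" fit in a few lines of code.  The rates below are arbitrary; the point is just the crossover Malthus feared:

food, population = 100.0, 10.0
for generation in range(10):
    print(f"gen {generation}: food={food:.0f}, population={population:.0f}")
    food += 20          # arithmetic: fixed increment per generation
    population *= 2     # geometric: fixed ratio per generation
# population passes food by generation 5 and never looks back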

But something about all this suddenly struck me as a set of problems undergoing successive refinement over the generations.  For example, the Nash equilibrium is a drastic improvement on Adam Smith that more closely approximates investor behavior... and in so doing, it shaped the behavior of a whole new generation of economists.

Business always looks so random at the low level, but high up there is a definite pattern of improvement after horrible costs and destruction -- I hate to think it, but historically speaking, perhaps pain is the best teacher after all?  It's startling to think of this as some big genetic algorithm slowly crawling toward the optimal ways to run business and increase production.  Still, it's also very exciting, because it means that the better the mathematical models, perhaps the less suffering we need before we learn...

Through better math, perhaps we can save the world.

Thursday, September 18, 2008

daydreams on a strange loop...

I came up with these fanciful meanderings after reading Godel, Escher, Bach by Hofstadter a few years ago... I was thinking about it again today, so I dug it out of some old linear algebra class notes.  It's fairly crazy in the details, but the "dance" of it is fun... don't take it too seriously, unless you take it as a warning of the dangers of sleep deprivation mixed with studies in linear algebra...

-lk



The fundamental problem that Godel's Incompleteness Theorem describes is that any nontrivial formal system is either incomplete or inconsistent.  However, a super-system may be built around the first system that does completely and consistently describe it!  The trick is, you have to go "outside" the system you want to describe in order to fully describe it (Hofstadter calls this a "strange loop").

The reason this isn't a terribly useful solution is that the super-system itself can be shown to be incomplete or inconsistent.  In fact, no matter how many systems you layer on top of each other to fix things, there is always one outermost layer that suffers from this problem.

So, combining two other things I read about (Godel numbering and Hilbert spaces):
What if it were possible to somehow compose an infinite number of formal systems such that they converge on a finite system?  Could the outer "error" (required to describe the inner systems completely) diminish with respect to the total, in a manner analogous to a limit?  Could the limit then approach a system that is both complete and consistent, in spite of Godel's incompleteness theorem?
(ED: this suddenly reminded me of a recent article I read about Negative Databases for some reason...)

Godel numbering allows any formula or formal system to be encoded as a single natural number. So, it follows that there must exist a set of Godel numbers that correspond to the math of Hilbert spaces.
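Here's a toy version in Python (the symbol table is invented, but the prime-exponent scheme is Godel's): a formula becomes the single integer 2^c1 * 3^c2 * 5^c3 * ..., where ci is the code of the i-th symbol.

SYMBOLS = {"0": 1, "S": 2, "=": 3, "(": 4, ")": 5, "+": 6}

def primes():
    """Naive prime generator -- fine for short formulas."""
    found = []
    n = 2
    while True:
        if all(n % p for p in found):
            found.append(n)
            yield n
        n += 1

def godel_number(formula):
    g = 1
    for p, ch in zip(primes(), formula):
        g *= p ** SYMBOLS[ch]
    return g

print(godel_number("S0=S0"))  # 2^2 * 3^1 * 5^3 * 7^2 * 11^1 = 808500

Unique factorization means the original formula can always be recovered from the number, which is what lets statements about formulas become statements about integers.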

Hilbert spaces are very interesting, because even though points in such spaces have an infinite number of coordinates, angles and distances measured between any such points are finite!  Weird, huh?  (Common applications are quantum physics, signal processing, Fourier transforms, etc.)
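A standard example for concreteness (mine, not from the original notes): the point

x = \left(1, \tfrac{1}{2}, \tfrac{1}{4}, \tfrac{1}{8}, \dots\right)

has infinitely many coordinates, yet its distance from the origin is finite:

\|x\| = \sqrt{\sum_{n=0}^{\infty} \left(2^{-n}\right)^2} = \sqrt{\frac{1}{1 - 1/4}} = \frac{2}{\sqrt{3}}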

(ED: heh, imagining extra dimensions reminds me of that great short story "Dreams in the Witch House" by H.P. Lovecraft... creepy!)

Since points in these spaces are vectors with an infinite number of coordinates, it seems reasonable that each point vector in this space could be interpreted as a Godel numbering representing a potential formal system.

Since there are an infinite number of points, there must be an infinite number of formal systems being represented... yet, it's possible to define a line of finite measure from the origin of this space to any coordinate point given an angle and a distance (using polar coordinates).  

A line represents an infinite set of points... does this imply that a finite Godel numbering can somehow represent an infinite set of formal systems?  

(ED: woah woah woah Tex, this is quite a leap...)

This raises even more questions:
What's the representation of the differential term used in the limit?
Does the differential actually converge towards a finite value?

(ED: it just kind of ends here... I probably wised up by this point.  Sorry, no breakthroughs!)


Saturday, September 13, 2008

No REST for Web Apps...

[update 9/20/08: actually, the O'Reilly book "RESTful Web Services" does a great job of distinguishing 3 categories of services: REST-ful resource-oriented, RPC, and REST-RPC hybrids.  The last category is very carefully considered.  The other thing that I really like about this book is the author's "whale-fish" simile for superficial details (like the technologies used) vs. architectural underpinnings.

My initial frame of mind when I started this post was based on several sites I've seen promoting REST-ful apps that, in my mind, are clearly making the "whale-fish" mistakes he talks about.  They simply haven't grasped the underlying architecture of REST well enough to tell the difference.
 
At the time I was worried that this was a general trend.  However, the O'Reilly book shows that the leaders of this movement know the limitations and are applying REST only where it makes sense.

I highly recommend getting this book.]




Yesterday I stumbled into what became a vigorous debate with my friend, a Ruby programmer, after I made a simple assertion:

It is impossible to write a web application without using a stateful connection of some sort.

It's an interesting debate. I think there's probably a difference between what he is calling "applications" and what I'm calling applications, but let's tackle it by looking at a quick history of the web and why REST was so successful for web 1.0.



Pre-REST

Before REST, users had to connect to remote systems through a terminal session or a special client. This was ok, except remote systems could only support a (relatively) small number of connections before they reached capacity. Even worse, the connections were idle most of the time since users are generally slower than computers.

So an interesting idea was to make the systems "stateless" (i.e. sessionless), by allowing servers to handle only one request per connection instead of multiple requests. This solved the problem of idle connections and allowed a single server to handle thousands of requests from separate users. For static[1] web documents this was a brilliant optimization and thus the World Wide Web was born.


Cookies

Of course, almost as quickly as the Web became popular, people at Netscape wanted to build dynamic web documents -- documents that could change their content depending on who was viewing them. Their first web application was an online "shopping cart" that could support:
  1. browsing a catalog
  2. adding an item to a virtual shopping cart
  3. checking out and paying for the item
But they had one small problem: REST.

With REST the user only gets to make one request and each request is completely independent of any other requests the user has made. It's like going to a market where they have magical shopping carts: every time you put something in the cart or try to pay for it, it instantly disappears! Clearly this was unacceptable, so RFC 2109 was born: a method of creating a stateful session using cookies.
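To see how small the trick was, here's a minimal sketch of a cookie-backed cart over stateless HTTP, using only the Python standard library (illustrative only -- not Netscape's actual implementation):

from http.server import BaseHTTPRequestHandler, HTTPServer
from http.cookies import SimpleCookie
import uuid

SESSIONS = {}  # server-side state keyed by session id: the "cart"

class CartHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        cookie = SimpleCookie(self.headers.get("Cookie", ""))
        sid = cookie["sid"].value if "sid" in cookie else None
        if sid not in SESSIONS:
            sid = uuid.uuid4().hex   # first visit: mint a session id
            SESSIONS[sid] = []       # ...and an empty cart
        if self.path.startswith("/add/"):
            SESSIONS[sid].append(self.path[len("/add/"):])
        body = f"cart = {SESSIONS[sid]}".encode()
        self.send_response(200)
        self.send_header("Set-Cookie", f"sid={sid}")  # the whole trick
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(body)

# HTTPServer(("", 8000), CartHandler).serve_forever()

At the HTTP level every request still stands alone; the sid cookie is the only thread tying requests back to server-side state -- which is exactly the statefulness pure REST gave up.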

However, with cookie-based sessions came performance and scalability issues.  Web applications and infrastructure unavoidably became slower and more complicated than the static web had been.


Secure Sessions

Almost as soon as the first shopping-cart web applications were deployed, people started becoming concerned about the security of their transactions. The next logical step was proposed in RFC 2818. This time the security concerns required a new protocol (SSL/TLS), layered underneath HTTP (i.e. HTTP over SSL, or HTTPS for short).

However, this didn't save web developers from the complexity of managing session state; it only exacerbated it. Now we had a stateful application (using cookies) on top of a stateless protocol (HTTP) on top of a completely separate stateful session protocol (SSL).  This added so much complexity to web applications that only the most demanding and wealthy clients (i.e. banking and commerce) could afford to develop and maintain such applications.

Most web architects avoid SSL intentionally, citing poor performance, poor scalability, and the complexity of management and deployment as key problems.


The Search for Simplicity

In the 1990's (the golden age of the early web) we didn't care about such problems, because the technology was new and exciting, the code was small, the problems were mere annoyances... and basically everything worked. However, by the end of the 90's, systems had become complex enough that developers started to rebel.


Web-fundamentalism?

One of the directions we rebelled in is a kind of web-fundamentalism (a return to basics): all this "state-management" was a bad idea in the first place; we should return to the REST-ful principles that worked so well the first time.  But what principles are we talking about?  URL-spaces?  Do people really understand which aspects of REST made web 1.0 successful?

As I asked these questions, I realized that most of the alternative[2] proposals were simply managing session state with different technologies -- instead of using cookies they'd use GET params, GET url-spaces or POST fields; instead of webserver memory, they'd use databases.  They hadn't really changed anything.

Session management by any other name is still session management, and it is fundamentally incompatible with the claims and assumptions of REST, chief among them the idea that such applications are still scalable and can support caching.

Let's explore this idea a little more since there is such resistance to it. Say I have a web application that allows me to rent videos... you might expect such an application to have a REST-ful design with the following types of urls:

http://videostore/signup
http://videostore/customers/larry/rented
http://videostore/customers/larry/cart/pay
http://videostore/customers/larry/cancel_membership
http://videostore/customers/larry/bill/09-17-2008
http://videostore/customers/larry/overdue/Big_Trouble_in_Little_China
http://videostore/videos/Harry_Potter/add_to_cart

On the surface, this looks beautiful... it looks REST-ful. But let's dig into some of those assumptions:

1) Below the pretty urls is state-management. Suppose the urls are all being served dynamically from a database. That means the web site has to read my information from the database and display it to me (but no one else) when I access a url below "/customers/larry".  One article I read said the REST-ful answer to this is HTTP basic authentication.  In theory that's stateless, since the credentials ride along with every request -- but in practice almost every real site implements login with a cookie-backed session instead.  Surprise! your session's back!

2) None of this is even remotely cacheable. When customers cancel or sign up, the url space changes structure. But how can I cache a url-space that is constantly changing? The simple answer is I can't/shouldn't. The "horribly-complicated-billions-of-wasted-dollars" answer is: sure, you can invest in configurable (or worse yet, "adaptive") caching at multiple levels and spend the rest of your earthly days debugging it. I can hear the sound of a thousand web developers slitting their wrists just thinking about this... gee, thanks.

3) One thing I read is that REST-ful web services are easier to deploy than XML services.  It sure looks that way from the service author's point of view initially, but then some critical questions come to mind: how do you know the difference between the verbs and the nouns?  I don't want the user to accidentally cancel their membership just by browsing the site.

Some people say that we should use the other two verbs in HTTP instead (PUT and DELETE). But how do you know that the client has PUT the correct format when you don't have any structural form that can be validated? Does the client just have to guess until they get it right?

I suppose the way out of that problem is to use some fancy AJAX and maybe some JSON object definitions, but now we're headed back into custom serialization territory. Haven't we been here before?  Isn't this why people stopped using SOAP in favor of simple XML and why people stopped using XML in favor of simple JSON?  Sooner or later features get added and simple isn't so simple.  
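For concreteness, here's roughly where that leads (a Python sketch; the mini-schema and field names are invented): the server ends up validating the shape of every PUT body by hand.

import json

EXPECTED = {"title": str, "days": int}   # hand-rolled mini-schema

def validate_put(body):
    try:
        doc = json.loads(body)
    except ValueError:
        return False, "not JSON"
    for field, ftype in EXPECTED.items():
        if not isinstance(doc.get(field), ftype):
            return False, f"bad or missing field: {field}"
    return True, "ok"

print(validate_put(b'{"title": "Big_Trouble", "days": 3}'))  # (True, 'ok')
print(validate_put(b'{"title": 42}'))  # (False, 'bad or missing field: title')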

Einstein had an opinion about simplicity; he reportedly said to make things "as simple as possible, but no simpler".  Do we really need to learn this lesson the hard way, by repeatedly wasting billions of dollars on new technology cycles that keep resolving essentially the same problems?


Anyway, because #1 and #2 basically break all the claims that REST makes about scalability and performance, and #3 points out that it's not easier, I think the perceived gains from simply applying a REST-like architecture to web applications are mostly fallacious.  Certain kinds of web services, maybe, but never applications.

Fortunately, a handful of other people were already realizing this back in 2004. 



State-management

The other direction we can go -- and should go -- is to finally accept "stateful" web applications as a given. Google Chrome, Adobe Flex and Microsoft Silverlight are all moving in this direction. REST has its place, but so do stateful applications. It's time to recognize the architectural pros and cons of each and use the right tool for the right job.

In some sense, this is all obvious -- it's just hard to see it because the layers of technology have always clouded the argument considerably.



[1] It's interesting to note that Fielding, the author of REST, originally thought in the context of information and media access -- he did not frame REST in contrast to RPC, a stateful technique. (from http://en.wikipedia.org/wiki/Representational_State_Transfer#Claimed_benefits)

[2] There is one notable approach that covers a certain subset of web services in a purely REST-ful way. However, this is not an "application" in the traditional sense; it's limited to "lookup"-style services.

There is a very simple litmus test to see if your service is in this subset: could you also implement the service, less efficiently, using ONLY static HTML pages? If the answer is yes, then it qualifies.


Thursday, September 11, 2008

lrn2patch: an open letter to Apple and updaters everywhere...

Look guys, I know you're probably busy creating the "next big thing" over there in Cupertino, but can you have one or two guys research the art of deploying software via patch files instead of downloading an entirely new 80MB image every time you deploy?
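The core technique is decades old. Here's the block-level idea in a few lines of Python -- a toy sketch of the rsync/bsdiff family, not any real patch format:

import hashlib

BLOCK = 4096

def make_patch(old, new):
    """Describe `new` as references to old blocks plus literal data."""
    index = {hashlib.sha256(old[i:i + BLOCK]).digest(): i
             for i in range(0, len(old), BLOCK)}
    patch = []
    for i in range(0, len(new), BLOCK):
        chunk = new[i:i + BLOCK]
        key = hashlib.sha256(chunk).digest()
        if key in index:
            patch.append(("copy", index[key]))   # block already on disk
        else:
            patch.append(("data", chunk))        # ship only the new bytes
    return patch

def apply_patch(old, patch):
    out = bytearray()
    for op, arg in patch:
        out += old[arg:arg + BLOCK] if op == "copy" else arg
    return bytes(out)

apply_patch(old, make_patch(old, new)) reconstructs the new image while shipping only the blocks that actually changed -- for a point release, that's usually a tiny fraction of 80MB.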

In fact, let's make this a global rant against the "Check Updates..." installers out there: 
  1. please realize that you are using OUR computers when you design and architect these systems. (i.e. the rude neighbor: "oh I'm sorry, were you using that?")
  2. please realize that other pieces of software may be trying to update at or near the same time. (i.e. the flood: "Me me me..." "Me too.")
  3. please realize that liberal use of "restart" semantics to complete installations may be magnified by the above. (i.e. the tantrum: "stop everything and do what I want... NOW!")
  4. [added 9/25] Please. Please make your updaters run at IDLE priority... there's no reason that a system update should interrupt the user or lock up the system while the update is occurring.
I realize that Windows forces #3 more than most devs can avoid (Macs less so, and Linux is almost impervious to restarts from installs).  But aside from those exceptions, you should be treating our computers with respect.  You don't own them or the networks that connect them.  You barely own the licenses of the software on them.

Please take some time to respect your users.  Thanks.

Wednesday, September 3, 2008

google's chrome is hot

This is my first blog post from Google's Chrome.  There's been a lot of chatter about whether this browser is a good idea, so here's my two cents:

Q: Firefox is already a great browser AND it's open source.  Why didn't Google just invest changes into this browser?

A: Chrome isn't simply another Mozilla fork; it has new low-level system features and architecture.  Firefox is a great browser, and it has pushed Web 2.0 far enough to expose some serious problems in the underlying design of Web 2.0 and the browsers that support it.  Changing Firefox just isn't practical when this many changes are needed... it's better to start over.

Q: "Separate Processes" per tab?  Why is that necessary?

A: The browser and web-based applications are hostile territory.  You can't trust that application developers are going to write friendly, clean, compatible code, so don't trust them.  This sounds counter-intuitive until you realize that all the rock-solid architectures do this: IP works because it fundamentally assumes that communication failure is NORMAL.  Likewise, modern operating systems believe in process isolation.  It's a "good thing"(tm) that Google is now doing the same for the browser.
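Here's a toy of the principle (Python processes standing in for tabs; purely illustrative): a crash in one process leaves its siblings untouched, which is exactly the guarantee a shared-process browser can't make.

import multiprocessing as mp

def tab(name):
    if name == "bad_tab":
        raise RuntimeError("rogue page!")   # simulated renderer crash
    print(f"{name} rendered fine")

if __name__ == "__main__":
    procs = [mp.Process(target=tab, args=(n,))
             for n in ("mail", "bad_tab", "docs")]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    # "bad_tab" dies with a traceback; "mail" and "docs" are unaffected.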

Q: Do we really need another browser?

A: Yes.  Application developers demand more of the web.  We should have a fully integrated debugging stack.  We should have protection against rogue or errant processes.  We should give the user visibility into what happens on their machine.  The current browser architectures address none of this.  The closest extension out there is FireBug (and FireCookie)... but this is only a start.  There are limits to what you can do within a poor javascript environment -- Google knows this because they are also web developers -- our pain has been their pain as well.  So yes, it's absolutely necessary that someone with the resources step up and help solve this mess.

By the way, people have said the same thing about Opera for a long time, but the truth is that Opera pushed both IE and Firefox towards better CSS compliance when it beat them badly on the Acid2 test.   Likewise, there are serious problems with current browser architectures that have not been addressed, and Google is now addressing them.

More competition is never a bad thing because it motivates exactly these kinds of changes.

Q: So great, now I have to design for another browser as well?

A: Unfortunately, that particular problem isn't going away anytime soon -- we already have at least 3 major browsers (IE, Firefox and Safari).  The truth is that a certain segment will always push the latest browsers, and gradually the rest will either move towards compatibility or lose market share as those sites become popular.

Q: So why use it before it becomes popular?

A: Stability.  With blogs, online banking, web mail, and intranet portals, it becomes very important that the browser not crash in the middle of edits.   If browsers were stable, this wouldn't be nearly as compelling.  We rely on a nightmare of javascript to run our sites now -- shouldn't that be based on a stable VM and a multiprocess model in line with modern operating systems, not 1980's technology?

Wednesday, August 20, 2008

hostile metrics...

While countering a blog post (#25) about men's involuntary glances at women, I raised an equally involuntary tendency of women to "test" men.   

The primary goal of such tests seems to be to present a problem that appears to have a simple logical answer, but in fact has no correct logical answer (i.e. a trap). A simple example would be the notorious question "Does this make me look fat?"

I started thinking about evolutionary advantages and motivations for such behavior...

Assume that reproductive resources are scarce and that they have high cost and intrinsic value for both sexes.  Anytime there are scarce resources worth having, there are at least a few general success-strategies: fight, lie or wait.

Let's focus on "lying" as a strategy.  Imagine a selection environment rich with liars trying to misrepresent their true value. Women who can't distinguish the liars from the non-liars are selected against, evolutionarily speaking.   So the goal is to find a metric that can tell the two apart.  But how could we do this?

We can't test with simple logic, because they'll lie ("Are you a liar?" "No. Of course not.").   But what about attacking the premise of a lie instead?   The general premise of a lie in this context would be to please the prospective mate. 

So here's an interesting idea... frame the concept of a "test" so that any logical answer or pleasing answer by definition fails. Now, apply the "test" to prospective mates and watch the reaction according to the following:

1) If the male responds with logic, there is always a negative answer that will assert he is wrong, i.e. this allows the test to be chained -- continue testing until he doesn't.  

2) If the male ignores the test, he's not interested or not attentive enough.  This lowers confidence -- test again.

3) However, if the male doesn't ignore the test but responds without logic (a non sequitur, a humorous response) in an unexpected but engaging way, that's notable. He's potentially demonstrated that he is attentive, but is not saying things simply to please his mate.   This raises confidence -- test again.

What's interesting about this idea is that it doesn't define an end.  This is a continual testing process that merely raises or lowers confidence.  It can adapt over time.  Of course, I don't know that women really use this process, but it is an interesting hypothesis.
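Sketched as code, the "no defined end" property is obvious.  Every category and delta below is an invented placeholder -- this models the shape of the process, not actual behavior:

import random

def response():
    # stand-in for the prospective mate's reply
    return random.choice(["logic", "ignore", "non-logic"])

confidence = 0.5
for test in range(20):   # no terminating condition, just a budget
    r = response()
    if r == "logic":
        pass                                        # case 1: chain the test, re-test
    elif r == "ignore":
        confidence = max(0.0, confidence - 0.05)    # case 2: inattentive
    else:
        confidence = min(1.0, confidence + 0.05)    # case 3: attentive, not pandering
print(f"confidence after 20 tests: {confidence:.2f}")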

Tuesday, August 12, 2008

entropy...

The Second Law of Thermodynamics says that entropy always increases. Ordered systems typically have lower entropy (like a clean desk), and classically they are said to have more available energy... so the 2nd law has always been a little sad.

However, what if there is a loophole? What if we:
1) look at increasingly complex systems of order. initially, these appear random to us, but understanding order in chaotic systems gives us more leverage to
2) do more with less. so we need less energy, thus creating more "headroom" -- does this (relatively speaking) reduce our apparent entropy?

Of course, it still seems like there is a trend of diminishing returns (this just lengthens the tail in some sense), but now instead of implying a simple "heat death" of the universe, the 2nd law implies the rise of greater and more subtle organization as well.

Interesting.

Monday, August 4, 2008

the return of intrinsic computer languages!

I recently found out about Processing. It's amazing. It's simple. It's portable.

It is the first intrinsic computer language in almost two decades. Languages for some reason have become cumbersome extrinsic behemoths... what do I mean by intrinsic and extrinsic?

Let's say you want to print to the console:
print "hello world";
This is a function that is intrinsic to almost every language in history. It's intrinsic because you didn't have to specify bindings to an IO channel, or how a unicode string gets loaded into memory, etc. etc... you just do it.

In contrast, here's an extrinsic example of how to print text to an OpenGL surface. Notice how much boilerplate you have to weed through to finally implement basically two steps.

Now let's look at a similar example done in Processing. Wow. It's so much easier. The code focuses exactly on what you wanted to do... all the other setup and infrastructure has become intrinsic.
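If you don't have Processing handy, Python's stdlib turtle module has a similar intrinsic flavor; as a rough stand-in for the example above, this is the whole program -- a window, a coordinate system, and a draw call with zero setup ceremony:

import turtle

turtle.hideturtle()
turtle.write("hello world", align="center", font=("Arial", 24, "normal"))
turtle.done()   # keep the window open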

Back in the old days, most computer languages were intrinsic. You simply couldn't afford to write millions of lines of code to get something done. Fortran, BASIC, Pascal, Smalltalk... somewhere along the way this viewpoint shifted.

C, C++, and now Java have led an ever increasing charge towards extrinsic declarations. It happened for the best of intentions: we wanted flexibility. C wanted to keep the kernel of intrinsic functions small so that it was portable and powerful. For a time this approach was also simple.

There is nothing simple today.

In the current age we are deluged with extrinsic code. We are drowning in the most convoluted syntax man has ever created. Hibernate, Flex... the list of modern application environments and their dozens of sub-languages requiring explicit coordination has grown out of control.

And none of it is what we really wanted to do in the first place.

It's time to return to intrinsic languages. I'm not advocating a return to the simpler, non-portable environments that C and its ilk freed us from. I'm saying the infrastructure has to get smarter -- it has to let us be portable, flexible, free from deployment tyranny, but it also has to be more intrinsic: transparent when we need it to be, opaque when we don't, subtle -- NOT obfuscated!

Processing is the most refreshing step in that direction since LOGO.

Friday, May 16, 2008

web design still sucks: an open letter to the W3C

For many years I've followed the public line of the W3C: you should separate concerns, keep the HTML close to the data, and use CSS for the presentation. Like other web developers, I've struggled between the right way to do things and the way that works (even when the experts don't agree on what the right way is -- google CSS 3-column layouts, for example). I've cursed browser makers for the seemingly arbitrary rules they use to render margins, box models and standard controls. And I've struggled to understand the vaguest definitions of resolution and weighting models ever written.

But then I realized the other day that the W3C simply lacks expertise in presentation-side implementation. At first this might seem like an outrageous claim, but consider that there are two other standards, at least as old as the web standards, that get this right: OpenGL and PostScript.

Both of these standards have a predictive presentation model -- if you draw something at a given coordinate, you get something at that coordinate, on any OpenGL hardware in existence, on any PostScript printer in existence. There's no crying about "relative" units either -- OpenGL units are completely relative, but they are superior to the W3C's notions of length because they are consistent! It's a far cry from an industry that can't even render its own standards correctly or consistently.

So why can't web standards give us the same predictability? I think it's because the W3C is fundamentally confused about its purpose and goals. On the one hand they wish to promote the idea of the semantic web (which is primarily a data-centric view), and on the other they begrudgingly hand some presentation-layer scraps to visual designers, who do the most heinous things imaginable to get their work done. I say "begrudgingly" because CSS was only proposed when table-layout threatened to blot out the semantic web forever. I still get the impression that the W3C views presentation concerns as an annoyance that interferes with its data-centric preferences.

I know there is some good work being done in CSS3 and HTML5 that we have yet to feel in mainstream site design (and even when we do, we'll have to consider the mishmash of currently deployed browsers to make it work correctly), but it's too little, too late.

The W3C's biggest nightmare is already happening: people who care about consistent, quality presentation are turning to rich internet application frameworks such as Flex and Silverlight because they're just so much easier to use. These platforms solve the browser problem by completely bypassing it -- the plug-in suddenly becomes the new browser! They can even back their presentation with XML datasources, so other clients can choose to view the pure XML (which is even better than XHTML for semantic purposes, since it doesn't need to be cluttered with presentation cruft).

Thursday, May 8, 2008

a philosophical take on quantum realism

I got the latest issue of SEED (#16) and was instantly captivated by the article on p.50 "The Reality Tests". The article goes into one of the more controversial questions of quantum mechanics, the idea that particles might not really exist until they are observed. Many physicists (including Einstein) have disagreed with this idea because it is so alien when compared with our everyday experience (i.e. the moon doesn't cease to exist because we stop looking at it, right?).

But the article explains that new experiments disagree with the idea of "realism" (i.e. that particles have an independent existence before measurement) by more than 80 orders of magnitude. So are we left with the unavoidable conclusion that we can create reality simply by observing it?

I'm not a physicist, but even without physics there are some philosophical problems with such a question.

First, in its most basic form, the explanation that reality is created by observation isn't an explanation. It simply pushes the problem into a different space: "observation" and what it means to be "observed".

I'm not aware of any accepted rigorous definitions of those terms, however the ideas of an observer "creating" reality seem to unavoidably involve free will. After all, it's commonly implied that the observer gets to choose what to observe, right? Except we don't really seem to have conscious control over what gets created "in the large". (For example, I would like to observe a million dollars in my living room right now... nope. didn't work.)

Maybe we don't have conscious control over our observations... but then we have to explain why observers create the same (or even a similar) classical reality that we perceive as coherent when theory says there could be an infinite number of alternate classical realities that are equally possible. Why this particular one? Collective unconscious?

If we disregard a "collective unconscious" of some sort, we are left with something even more dramatic: for some reason, observations collapse onto the same reality regardless of all the possibilities -- i.e. you are drawn back to a system of fate without any free will.

But a deterministic system by any other name is still a deterministic system. We started out wanting to say that the particle had no particular existence before observation and instead ended up with a system where observations are as deterministic as the previous definitions of the particles they tried to explain.

So, really, when we're talking about "observer" and "create", we are talking about something very special that doesn't match our everyday meanings. We should figure out what those terms mean before using them in quantum theory.


p.s. there is another wrinkle with "observer" -- quantum physics says you can't observe things without affecting them, so at the simplest level it sounds like observers are just participants in interactions. But interactions affect all matter and energy, so does this mean that everything is an "observer"? If not, what is special about "observers" then?