Friday, October 4, 2013

The twisting path of Obamacare and the Insurance Industry

Nothing ever makes sense in politics until you follow the money...

http://carpenterevert.com/health-care-reform/the-affordable-care-act-will-change-health-care-for-everyone/

"Everyone will benefit from new rights and protections that start January 1, 2014.  These include:

...Insurance company accountability through the 80/20 Rule. The 80/20 Rule requires insurance companies to spend at least 80% of the money they make from premiums on health care instead of administrative costs, salaries, overhead and marketing. If an insurance company doesn’t meet the ratio, you will get a rebate from your premiums."
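
As I read it (a rough sketch of the mechanics described above, not official guidance), the rebate math works something like this:

    # Rough sketch of the 80/20 rule as I read it (not official guidance).
    # If less than 80% of premium dollars go to actual care, the shortfall
    # comes back to policyholders as a rebate.
    def rebate(premiums:, spent_on_care:)
      ratio = spent_on_care / premiums.to_f
      return 0.0 if ratio >= 0.80
      (premiums * (0.80 - ratio)).round(2)
    end

    rebate(premiums: 10_000, spent_on_care: 7_500)   # => 500.0 (75% spent on care, 5% rebated)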

Is this the real reason for the shutdown?  But why would small businesses back the 'tea party' conservative core in the House against a cap that should reduce their costs?  And why isn't the insurance industry openly fighting it?  Actually, it turns out they secretly are...

http://www.huffingtonpost.com/2013/05/15/insurance-industry-obamacare_n_3280405.html
  1. the insurance lobby paid the small-business lobby NFIB $850K to protest Obamacare (the very legislation the insurance lobby proposed in order to get the individual mandate passed), and 
  2. the insurance lobby threatened to pass the costs of the rebate on to small businesses unless the rule was stopped.

So let's break this down.  If the Affordable Care Act (ACA) forces insurers to spend 80% of their premium on services rather than profit, how exactly is the insurance industry going to pass that cost on to small business?  They'll raise the premiums?  But then they'll have to pay out more rebates or offer more service?!  That doesn't sound like much of a sustainable threat.
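
Using the same made-up numbers as the sketch above: if care spending stays flat while premiums go up, the rule just hands more of the increase back as rebates.

    # Toy numbers only: raising premiums without raising care spending
    # simply increases the rebate owed under the 80/20 rule.
    care = 7_500.0
    [10_000.0, 12_000.0].each do |premiums|
      ratio  = care / premiums
      rebate = ratio < 0.80 ? premiums * (0.80 - ratio) : 0.0
      puts "premiums #{premiums.round}: care ratio #{(ratio * 100).round(1)}%, rebate #{rebate.round}"
    end
    # premiums 10000: care ratio 75.0%, rebate 500
    # premiums 12000: care ratio 62.5%, rebate 2100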

The ACA never anticipated that the insurance industry would (or could) threaten to take money from small business owners and give it to their employees via rebates and services.  That's ... erm... a very *creative* way of scaring small business owners.

And why would the insurance lobby pay $850K for this?  Because the anticipated cost of the rebates over the next decade is, by their own estimate, $100 billion.  $850K is small change.

So let's do some rough math to put some of these numbers in context...

(actual_margin - 20%) of premiums = $100 billion over 10 years (the amount the insurance industry says it will lose by reducing its margin to 20%).

http://biz.yahoo.com/p/431mktu.html

Currently, the Accident and Health Insurance industry has a market cap of $196B.

Now it's hard to know the actual margin from the market cap alone (the market's estimate of value, roughly assets minus liabilities), but we can get in the ballpark by assuming the industry's annual net profit is on the order of its market cap: call it $196B/yr, or $1,960B over 10 years.

$100B (the amount they say they will lose) / $1,960B (the net profit over 10 years) = 5.1%

5.1% (the lost margin) + 20% (the new cap) = the current margin

So, ballpark, it sounds like the insurance industry is currently operating at around a 25% margin, which would be reduced to 20% by January 2014.
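
For anyone who wants to check my arithmetic, here it is spelled out, with the same big assumption as above (annual net profit on the order of the market cap):

    # Ballpark arithmetic from above; the big assumption is that annual
    # net profit is roughly the $196B market cap.
    profit_per_year     = 196.0                    # $B (assumed ~ market cap)
    profit_over_decade  = profit_per_year * 10     # => 1960.0 ($B over 10 years)
    claimed_rebate_cost = 100.0                    # $B over 10 years, the industry's own estimate

    margin_loss    = claimed_rebate_cost / profit_over_decade   # => ~0.051, about 5.1%
    current_margin = 0.20 + margin_loss                         # => ~0.251, roughly 25%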

It's hard to know how close these estimates are to the truth, and I can understand why the insurance industry doesn't want too much light shed on its practices... maybe this isn't really about the margins at all; maybe it's about regulatory compliance (i.e. who pays the cost of verifying that insurers hit the 80% service rate -- if they have to carry that cost themselves, their margins shrink even further to cover the compliance checks).

But in the wake of a government shutdown and possible default, I think it's time to shift the conversation in Washington from abstract rhetoric (e.g. "Obamacare will increase costs and ruin the economy") to the specific details of how the economy actually gets wrecked -- i.e. where the money goes.

Maybe we don't have the time or patience for a detailed, candid discussion... in which case, I'll close with one last appeal to Washington: let's consider science instead of ideology?

Both Democrats and Republicans firmly believe at this point that the other side's proposals will wreck the economy.  Inaction will certainly wreck it.  Why not let experimental evidence decide instead of ideology?  Why not stick with the current law, which the Supreme Court upheld, and see whether or not it damages the economy?

If it does, there should be overwhelming support on all sides for repeal and reform.

If it doesn't, we may have learned a valuable lesson in how to stop special interests from manipulating our ideological differences for their own financial ends.


Friday, May 10, 2013

but... REST is like a protocol, isn't it?


Came across one of Roy Fielding's blog posts while researching the latest on how to build REST services/clients and read this:
REST is an architectural style, not a protocol
So, I get what he's saying... REST doesn't depend on HTTP 1.1 per se; it's a conceptual framework. But the conceptual framework itself behaves like a protocol in some key respects, doesn't it?
  • REST takes an unlimited space of interaction and constrains it to a handful of verbs.
  • As you walk up the OSI stack, 'verbs' start out very constrained and then expand to anything possible (e.g. compare the number of verbs at the data link layer with those expressible at the application layer).  By this metric (number of verbs), REST feels "lower" than an application-layer framework.
The argument isn't about a lack of verbs... the data link layer in OSI is fully capable of transmitting any possible state over the wire.  The question is whether applications consume that representation directly or convert it to a higher-level construct.
  • Applications often "tunnel" their true data and method expressions through REST, like a protocol.  Hence client and server code often (always?) has to box/unbox its application-level semantics into and out of the 'lower level' REST semantics, including:
    • exception handling
    • instance vs. collection
    • query vs. control
If so, then REST needs to clearly define how objects at the layer above are framed over a RESTful transport. Restful Objects is one such attempt that is gaining a little traction in that area, but it feels very complex.
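
To make the boxing/unboxing point concrete, here's a minimal sketch of a hypothetical client -- the OrderClient, its URIs, and its error mapping are all invented for illustration; only Ruby's stdlib Net::HTTP and JSON are real:

    require 'net/http'
    require 'json'
    require 'uri'

    # Hypothetical client: the application thinks in terms of "approve this order",
    # but on the wire everything gets boxed into a handful of verbs plus a URI,
    # then unboxed again on the way back.
    class OrderClient
      def initialize(base_url)
        @base = URI(base_url)
      end

      # application-level command ("control") boxed as a POST to a sub-resource
      def approve(order_id)
        unbox(Net::HTTP.post(URI("#{@base}/orders/#{order_id}/approval"), '{}',
                             'Content-Type' => 'application/json'))
      end

      # application-level query boxed as a GET; instance vs. collection is a URI convention
      def find(order_id)
        unbox(Net::HTTP.get_response(URI("#{@base}/orders/#{order_id}")))
      end

      private

      # unboxing: HTTP status codes get mapped back into application-level errors
      def unbox(res)
        case res.code.to_i
        when 200..299 then JSON.parse(res.body)
        when 404      then raise "OrderNotFound"
        when 409      then raise "OrderAlreadyApproved"   # a domain error tunneled through 409
        else               raise "UnexpectedResponse: #{res.code}"
        end
      end
    end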

In fact, I was considering writing a Ruby implementation of that spec when it struck me that the complexity of Restful Objects isn't in the native POJO objects and interactions; it's in boxing/unboxing them into and out of the REST layer...

Well in that context, REST sure smells like a protocol, doesn't it?

Wednesday, March 6, 2013

WaSP's True Legacy: Compliance Testing

The Web Standards Project (WaSP) has closed after 15 years of improving the web.

Their biggest contribution, as they see it, was the countless hours of outreach, working with vendors and web developers to evangelize the right way to build standards-compliant web sites. Their biggest contribution, as I see it, was the development of presentation-layer compliance tests (Acid and Acid3).

See, before WaSP's compliance tests, browsers had no definitive way to show how well they implemented CSS and HTML visually. They could just claim that they supported standards, fudge or ignore the grey areas of the W3C specs, and everything was "standardized". Except web devs back then had to carry an enormous bag of tricks and workarounds to get anything close to a standard visual design implemented. The early web sucked visually. Print designers laughed at the pitiful layout controls, and web devs struggled to get even the simplest forms aligned.

Then the WaSP Acid test came out and everyone could instantly see which browsers sucked and which didn't.



Browsers immediately started competing with each other to see who could get the most compliant score.

The Acid3 test upped the ante even further:



All of a sudden, the visual web became an entirely different experience for web devs.  Browsers actually supported most, if not all, of the features of CSS, and following the W3C standards was no longer a penalty you swore and gritted your teeth through.

But almost none of this was outreach, correcting the "silly errant ways" of web devs.  It had everything to do with compliance tests.  How do I know?  Easy: look at OpenGL and DirectX.  In order to be promoted as a compliant video driver, you have to pass compliance tests that compare the graphics your video card produces with reference images.  If you don't generate exactly the same images, you fail.  Simple.  If you fail, you aren't allowed to claim to be an OpenGL standards-compliant video driver (or the equivalent from Microsoft's proprietary Windows Hardware Quality Labs certification).  These certs are big money, because if you aren't certified, no one buys your product.
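
The spirit of those tests is almost embarrassingly simple to express -- render, compare against the reference, pass or fail. A toy sketch (the pixel arrays below are made up; this is not a real certification harness):

    # Toy sketch of a reference-image compliance check (not a real cert harness).
    def compliant?(rendered_pixels, reference_pixels, tolerance: 0)
      return false unless rendered_pixels.size == reference_pixels.size
      rendered_pixels.zip(reference_pixels).all? { |got, want| (got - want).abs <= tolerance }
    end

    reference = [255, 255, 255, 0, 0, 0]   # pretend: pixels from the vendor's reference image
    rendered  = [255, 255, 254, 0, 0, 0]   # pretend: pixels from the driver under test
    puts compliant?(rendered, reference) ? "PASS: certified" : "FAIL: no badge for you"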

But browsers have been a different story.  There has been no presentation-layer compliance testing in browser standards because the W3C has a very rigid view of what browsers are.  For them, a browser is merely an endpoint in a data stream.  The W3C cares very much about the data stream, which is why the majority of the technologies the W3C talks about are actually data standards that say nothing about how the browser is supposed to render the data it receives.  In fact, the W3C views this as a plus, because browser data should be displayable in a variety of formats (e.g. large text, screen readers, etc.).  In other words, they've confused supporting multiple presentation formats with not specifying most of the presentation-layer behavior at all.

That's the real reason the web sucked before WaSP.  The W3C had no concept of a presentation layer even though technologies like PostScript had already paved the way for true device-independent layouts that actually looked good!  Instead the W3C's web looked clunky.  It looked broken, even though the data behind it was excellent.  Even today, the W3C does an excellent job of providing data validation for CSS, HTML, and the numerous other data formats it defines.

But WaSP was the first (and only) group to actually define a reference visual compliance test for what layouts should look like if they followed all the standards.  The effort wasn't completely defined by the W3C -- some grey areas left unspecified by the standards had to be filled in -- but WaSP led the charge and drove that compliance test to acceptance by the industry.

Now browsers could be publicly shamed for failing the compliance tests, and devs could see reference implementations of how the standards were supposed to work.  It was a win-win.

So now I wonder, as WaSP shuts its doors, who will pick up this charge?  Will it be Ian Hickson (of Google), who worked on the Acid3 test?  Will it be an industry effort run by a non-profit like browserscope.org?  Or will the Acid3 test simply age and lapse into obscurity, once again freeing browser makers from paying really close attention to the semantics of the presentation layer?

I hope someone picks up the torch, otherwise it will be a darker world with WaSP gone.

Tuesday, June 12, 2012

The Commons still open for Tragedy

ok, this interpretation of Ostrom's work made me somewhat mad: http://tierneylab.blogs.nytimes.com/2009/10/15/the-non-tragedy-of-the-commons/
  1. re: overpopulation -- look at the data! nothing since Malthus has disproved his general law of geometric growth of population vs. linear growth of resources -- it continues to this day! http://en.wikipedia.org/wiki/Malthusian_catastrophe
  2. is the NYT really covertly arguing against EPA regulation?  Sure, we have nothing to worry about... these things fix themselves, right?
  3. Yes, Nature *always* fixes itself in the end, so there really is no "tragedy of the commons" from this perspective... but not all of Nature's possible "solutions" need involve the human species -- so we might, just maybe, want to pay a little closer attention to engineering our management of common resources before we make such ridiculous statements.
  4. Ostrom was interested in why some approaches worked and others didn't in managing shared commons... centralized government is often not the answer, but neither is letting things simply "take care of themselves".

Sunday, July 17, 2011

Trial and Error and the God Complex...

I just saw Tim Harford's TED talk: Trial and Error and the God Complex.

Harford makes some good arguments for evolutionary, "trial and error" approaches to solving complex problems.  But there are also significant risks.

Harford's examples are very carefully chosen so as not to expose the limitations of evolutionary approaches to problem solving. For example, a human baby is a product of evolution without a specific design -- the outcome of natural environmental constraints and common descent. Harford's "nozzle" story is different: there is a desired design, which is found through a genetic algorithm. In this case, the principle of evolution is applied in an artificial environment where the constraint is a metric on a fitness function (e.g. how well does the nozzle make detergent?). This lets the genetic algorithm incrementally improve the design without anyone knowing how the design works, by passing on the traits that contribute to passing the fitness function.

It is helpful to realize that top-down design and genetic-algorithm design are simply different ways of finding solutions (optima). One isn't inherently better than the other, any more than analytic solutions to functions are better than numerical ones; nor are the two always equally easy, or even possible, to find.

Genetic algorithms only work when a fitness metric can be established (e.g. how well the nozzle makes detergent); the technique is not useful or appropriate when a fitness function is unknown. So to solve problems the way Harford suggests, you must already have expended considerable thought defining a fitness function that correctly specifies the question. That isn't trivial in the general case; it can be harder than solving the problem with top-down design methods.
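
To make that concrete, here's a toy genetic algorithm -- purely illustrative, nothing to do with the actual nozzle work Harford describes, and with a fitness function that is deliberately trivial:

    # Toy genetic-algorithm sketch: evolve a vector of parameters toward a target.
    # The hard, human part is the fitness function itself -- here it's trivially
    # "distance to a known target", which is exactly the luxury real problems lack.
    TARGET = [3.0, -1.0, 4.0]

    def fitness(candidate)
      -candidate.zip(TARGET).sum { |c, t| (c - t)**2 }   # higher (less negative) is better
    end

    def mutate(candidate)
      candidate.map { |gene| gene + (rand - 0.5) * 0.5 } # small random tweak
    end

    population = Array.new(20) { Array.new(3) { rand * 10 - 5 } }

    100.times do
      survivors  = population.sort_by { |c| -fitness(c) }.first(10)  # keep the fitter half
      population = survivors + survivors.map { |c| mutate(c) }       # refill with mutants
    end

    best = population.max_by { |c| fitness(c) }
    # `best` drifts toward TARGET without anyone "designing" it -- but only because
    # we could write the fitness function down in the first place.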

Harford's example of the stock market is not completely unrealistic -- it is entirely likely that "something" is evolving faster than humans -- but since we have only very vague ideas about what a fitness function like "fiduciary trust" actually selects for, we don't know exactly what kind of corporations will evolve to claim us as pets.

This brings us to the second shortcoming of evolutionary techniques: while they are excellent at homing in on optima using a fitness function, the only thing you can say about the solutions they produce is that they are probably good enough to pass the fitness function -- nothing more. If your fitness function fails to cover exceptional conditions, there is no guarantee that the solution will remain stable under them. Even more fundamentally, you have no idea how the solution actually works.

Genetic algorithms can lead scientists to breakthroughs; however, in order to understand how those breakthroughs work, the solutions must be reverse-engineered back into an analytic understanding.

Harford focuses so closely on the amazing ability of these systems to find answers that he forgets that those answers need rigorous testing. It turns out that humans are even worse at defining correct testing scenarios than we are at building systems top-down. So while his approach may be compelling, it is not without significant danger. It replaces one evil (what he calls the "God complex") with another: the hubris of thinking we know what we want (i.e. understand the fitness function) well enough to recognize when we get it.

Asimov warned us of this hubris in stories featuring the notoriously well-meaning "Laws of Robotics" that led robots to very different conclusions than expected. And Crichton warned us about trying to control such systems as they rapidly change and exceed our ability to contain them.

Real evolution is raw, untamed, and unpredictable -- it doesn't play by rules we fully comprehend.

Monday, July 11, 2011

Good Lesson, but not universal...

Aaron Iba explains in his blog that building his iPhone game led him to a new way of thinking about development -- one he thinks makes him a better programmer and startup founder.

"Iterate-and-repair" is a good optimization strategy where opportunity costs are low (e.g. trying out different solutions in a puzzle game), but it isn't so great where opportunity costs are high (e.g. trying to send a man to the moon). In that case, the best strategy is still to engineer an optimal solution using mathematics.

The key distinction is not learning styles, nor even an MIT background, but rather asking the question: "what is the opportunity cost for my solution?"

The late 90's saw a wave of "good enough" engineering based on a similar iterate-and-repair idea, but it notoriously did not factor opportunity cost into the analysis. Who needs all those edge cases and tests when you can follow the 80/20 Pareto rule and get to market quickly? Well, if you've ignored the opportunity costs and your solution fries a thousand people in an aircraft, or tanks a billion dollars in stocks overnight, you'd better believe that an 80/20 approach is not "good enough" after all.
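
Put crudely, the missing term in that calculus was the expected cost of being wrong. The numbers below are entirely made up, purely to show the shape of the comparison:

    # Made-up numbers, purely to show the shape of the comparison.
    def expected_cost(build_cost, failure_probability, failure_cost)
      build_cost + failure_probability * failure_cost
    end

    # A puzzle-game feature: iterate-and-repair wins easily.
    expected_cost(10_000, 0.20, 5_000)              # => 11000.0
    # An avionics system: the "good enough" discount evaporates.
    expected_cost(10_000, 0.20, 1_000_000_000)      # => 200010000.0
    expected_cost(500_000, 0.0001, 1_000_000_000)   # engineered instead: => 600000.0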

Of course you should use the right tool for the job, but there is still a need for engineered solutions.

Thursday, February 17, 2011

ruby and json vs xml marshalling

I've been watching a silent war rage between the Ruby community, with its preference for JSON and RESTful services, and the Java community, with its preference for XML and SOAP.

The rdoc for Nokogiri snarks at the Java community with a simple quote:

"XML is like violence - if it doesn’t solve your problems, you are not using enough of it."

Like many devs caught in the middle of this war (namely, having to integrate Rails apps with Java SOA backends), I'm never quite satisfied with the state of Ruby XML marshalling. Sure, there are a dozen gems that address it, but they always range between giant thousand-object DOMs that leak memory like a sieve, arcane forced DSL syntaxes, and simple-to-use but incomplete APIs. So I started to think about hand-rolling "yet another XML marshaller" for Ruby.


My first idea was to simplify the XML DOM so that everything is unmarshalled as an element. Then I can use a simple hierarchical accessor -- the "dot" operator -- on the Ruby object side.
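
Something roughly like this -- a minimal sketch of the idea, with OpenStruct standing in for a real object model and REXML doing the parsing:

    require 'ostruct'
    require 'rexml/document'

    # Sketch: unmarshal *everything* as an element and expose it through dot accessors.
    def unmarshal(node)
      obj = OpenStruct.new
      node.attributes.each { |name, value| obj[name] = value }
      node.elements.each do |child|
        obj[child.name] = child.has_elements? ? unmarshal(child) : child.text
      end
      obj
    end

    doc    = REXML::Document.new('<person name="mike"><address><city>Portland</city></address></person>')
    person = unmarshal(doc.root)

    person.name          # => "mike"
    person.address.city  # => "Portland"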

On the face of it, I thought that looked pretty sweet. But then I considered remarshalling this object as XML. D'oh! There's no way to tell whether "mike" is an element or an attribute... well, I could stick a little metadata on there to remind me later, but it's starting to look like I've stumbled onto the same path that so many other Ruby gems took trying to solve this problem. And it's not for lack of trying -- Ruby devs have attacked this ambiguity in many ways, with varying results but not a lot of clear wins; there's always a "gotcha" in there that makes you wince when you have to handle it. (It's the same wince Java devs get when they switch Axis jars on one server but not another.)


I sat back in disgust and thought to myself,

"wow, this would be so much easier if it was just JSON..."

then it hit me... JSON is easy because round-trip marshalling doesn't introduce any ambiguity between elements and attributes! Everything is an element!!!
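
A quick sanity check of that claim, using nothing but the stdlib:

    require 'json'

    # Round-trip: parse and regenerate, with no element-vs-attribute guessing.
    original = '{"person":{"name":"mike","address":{"city":"Portland"}}}'
    data     = JSON.parse(original)

    data['person']['name']              # => "mike"
    data['person']['address']['city']   # => "Portland"

    JSON.generate(data) == original     # => true -- nothing lost, nothing ambiguous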

No wonder JSON is so easy!