Friday, October 4, 2013

The twisting path of Obamacare and the Insurance Industry

Nothing ever makes sense in politics until you follow the money...

"Everyone will benefit from new rights and protections that start January 1, 2014.  These include:

...Insurance company accountability through the 80/20 Rule. The 80/20 Rule requires insurance companies to spend at least 80% of the money they make from premiums on health care instead of administrative costs, salaries, overhead and marketing. If an insurance company doesn’t meet the ratio, you will get a rebate from your premiums."

Is this the real reason for the shutdown?  But why is the 'tea party' conservative core in the House being supported by small businesses against a reduction in costs from such a cap?  And why isn't the insurance industry openly against this?  Actually, it turns out they secretly are...
  1. the insurance lobby paid the small-business lobby NFIB $850K to protest Obamacare (the very legislation the insurance lobby proposed in order to get the individual mandate passed), and
  2. the insurance lobby promised to pass the costs of the rebate on to small businesses unless they stopped it.

So let's break this down.  If the Affordable Care Act (ACA) forces insurers to spend 80% of their premiums on services rather than profit, how exactly is the insurance industry going to pass that cost on to small business?  They'll raise the premiums?  But then they'll have to pay out more rebates or offer more services?!  That doesn't sound like much of a sustainable threat.

The ACA never anticipated that the insurance industry would (or could) threaten to take money from small business owners and give it to their employees via rebates and services.  That's ... erm... a very *creative* way of scaring small business owners.

And why would the insurance lobby pay $850K for this?  Because their own estimate of the cost of rebates over the next decade is $100 billion.  $850K is small change.

So let's do some rough math to put some of these numbers in context...

actual_margin - 20% = $100 billion over 10 years (the amount the insurance industry says it will lose by reducing its margin to 20%).

Currently, the Accident and Health Insurance industry has a market cap of $196B.

Now it's hard to know the actual margin from just the market cap (the market's estimate of value, roughly assets minus liabilities), but we can get in the ballpark by assuming that the market's valuation is close to the actual value, which represents profit minus costs.  So say the net profit is the margin, i.e. $196B/yr, or $1960B over 10 yrs.

$100B (the amount they say they will lose) / $1960B (the net profit over 10 yrs) = 5.1%

5.1% (percentage loss) + 20% = current percentage margin

So, ballpark, it sounds like the insurance industry is currently operating at around a 25% margin, which would be reduced to 20% by Jan 2014.
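The ballpark arithmetic above can be run directly. Everything here restates the post's own assumptions (market cap standing in for annual net profit, the industry's $100B rebate estimate), not verified figures:

```ruby
# Rough margin math from above. Both inputs are assumptions from the post,
# not audited figures.
annual_profit_b = 196.0                  # $B/yr, market cap used as a proxy
rebate_cost_b   = 100.0                  # $B over 10 yrs, industry's estimate

profit_10yr_b = annual_profit_b * 10     # $1960B over the decade
loss_pct      = rebate_cost_b / profit_10yr_b * 100  # share of profit lost

current_margin_pct = loss_pct + 20.0     # loss sits on top of the 20% floor

puts format("loss: %.1f%%  implied current margin: %.1f%%",
            loss_pct, current_margin_pct)
# → loss: 5.1%  implied current margin: 25.1%
```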

It's hard to know if these estimates are close to the truth, and I can understand why the insurance industry doesn't want too much light shed on its practices... maybe this isn't really about the margins; maybe it's more about the cost of regulatory compliance (i.e. who pays for determining whether insurance companies hit the 80% service rate -- if insurers have to carry this cost, their margins are reduced even further to cover compliance checks.)

one last appeal to Washington: let's consider science instead of ideology?

But in the wake of a government shutdown and possible default, I think it's time to shift the conversation in Washington from abstract rhetoric (i.e. "Obamacare will increase costs and ruin the economy") to the specific details of how the economy gets wrecked, i.e. where the money actually goes.

Maybe we don't have time or patience for a detailed candid discussion... in which case, I'll close with one last appeal to Washington: let's consider science instead of ideology?

Both Democrats and Republicans firmly believe at this point that the other's proposals will wreck the economy.  Inaction will certainly wreck the economy.  Why not let experimental evidence decide instead of ideology?  Why don't we stick with the current laws, which the Supreme Court upheld, and see whether or not they damage the economy?

If it does, there should be overwhelming support on all sides for repeal and reform.

If it doesn't, we may have learned a valuable lesson in how to stop special interests from manipulating our ideological differences for their own financial ends.

Friday, May 10, 2013

but... REST is like a protocol isn't it?

Came across one of Roy Fielding's blog posts while researching the latest on how to build REST services/clients and read this:
REST is an architectural style, not a protocol
So, I get what he's saying... REST doesn't depend on HTTP 1.1 per se; it's a conceptual framework.  But doesn't REST the conceptual framework itself behave like a protocol in some key aspects?
  • REST takes an unlimited space of interaction and constrains it to a handful of verbs.
  • As you walk up the OSI stack, 'verbs' start out very constrained and then expand to anything possible (i.e. compare the number of verbs at the data link layer with the verbs expressible in the application layer).  By this metric (number of verbs), REST feels "lower" than an application-layer framework.
The argument isn't about lack of verbs... the data link layer in OSI is fully capable of transmitting any possible state over the wire.  The question is whether applications consume that representation directly or by conversion to a higher-level construct.
  • Applications often "tunnel" their true data and method expressions through REST, like a protocol.  Hence client and server code often (always?) has to box/unbox its application-level semantics into and out of the 'lower level' REST semantics, including:
    • exception handling
    • instance vs. collection
    • query vs. control
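As a concrete (and entirely hypothetical) sketch of that boxing/unboxing, here's what tunneling an application-level `cancel` verb through a generic POST might look like. The `/orders/:id/actions` route and the action name are made up for illustration, not from any real API:

```ruby
require "json"

# The app's true verb is "cancel", but REST only offers GET/POST/PUT/DELETE,
# so the verb gets boxed into the body of a generic POST...
def box_cancel(order_id)
  { method: "POST",
    path:   "/orders/#{order_id}/actions",
    body:   JSON.generate("action" => "cancel") }
end

# ...and unboxed again on the server, where even exception handling is
# tunneled: a domain-level error has to become an HTTP status code.
def unbox(request)
  action   = JSON.parse(request[:body])["action"]
  order_id = request[:path][%r{/orders/(\d+)/}, 1]
  return { status: 400, error: "unknown action" } unless action == "cancel"
  { status: 200, result: "order #{order_id} cancelled" }
end

response = unbox(box_cancel(42))
puts response[:result]   # → "order 42 cancelled"
```

Neither side ever expresses the application's real semantics in REST's own vocabulary; REST is just the framing underneath, which is exactly how a lower-layer protocol behaves.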
If so, then REST needs to clearly define how objects at the layer above are framed over a RESTful transport. Restful Objects is one such attempt that is gaining a little traction in that area, but it feels very complex.

In fact, I was in the process of considering writing a Ruby implementation of that spec, when it struck me that the complexity of Restful Objects isn't about the native POJO objects and interactions, it's about boxing/unboxing them into and from the REST layer... 

Well in that context, REST sure smells like a protocol, doesn't it?

Wednesday, March 6, 2013

WaSP's True Legacy: Compliance Testing

The Web Standards Project (WaSP) has closed after 15 years of improving the web.

Their biggest contribution (as they see it) was all the countless hours of outreach, working with vendors and web developers to evangelize the right way to build standards-compliant web sites. Their biggest contribution as I see it was the development of presentation-layer compliance tests (ACID and ACID3).

See, before WaSP's compliance test, browsers had no definitive compliance test for how well they implemented CSS and HTML visually. They could just claim that they supported standards, fudge or ignore the grey areas of the W3C specs, and everything was "standardized". Except web devs back then had to have an enormous bag of tricks and workarounds to get anything close to a standard visual design implemented. The early web sucked visually. Print designers laughed at the pitiful layout controls, and web devs struggled to get even the simplest forms aligned.

Then the WaSP ACID test came out and everyone could instantly see which browsers sucked and which didn't.

Browsers immediately started competing with each other to see who could get the most compliant score.

The ACID3 test upped the ante even further.

All of a sudden, the visual web became an entirely different experience for web devs.  Browsers actually supported most if not all of the features of CSS, and following the W3C standards was no longer a penalty that you swore and gritted your teeth through.

But almost none of this was outreach, correcting the "silly errant ways" of web devs.  It had everything to do with compliance tests.  How do I know?  Easy: look at OpenGL and DirectX.  In order to get promoted as a compliant video driver, you have to pass compliance tests that compare graphics on your video card with reference images.  If you don't generate exactly the same images, you fail.  Simple.  If you fail, you aren't allowed to claim to be an OpenGL standards-compliant video driver (or the equivalent from the proprietary Windows Hardware Quality Labs certification).  These certs are big money, because if you aren't certified, no one buys your product.
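The essence of such a certification check is tiny. A toy version (pixel arrays standing in for real framebuffer captures; nothing here is from any actual certification suite) might look like:

```ruby
# Toy reference-image compliance check: the rendered output must match the
# reference exactly, pixel for pixel, or certification fails.
def compliant?(rendered, reference)
  rendered == reference
end

reference  = [[255, 255, 255], [0, 0, 0]]   # two-pixel "reference image"
exact      = [[255, 255, 255], [0, 0, 0]]
off_by_one = [[255, 255, 254], [0, 0, 0]]   # one channel off => fail

puts compliant?(exact, reference)       # → true
puts compliant?(off_by_one, reference)  # → false
```

The point isn't the comparison itself; it's that an unambiguous pass/fail against a shared reference leaves vendors nowhere to hide, which is exactly what the ACID tests brought to browsers.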

But browsers have been a different story.  There has been no presentation-layer compliance testing in browser standards because the W3C has a very rigid view of what browsers are.  For them, a browser is merely an endpoint in a data stream.  The W3C cares very much about the data stream, which is why the majority of the technologies the W3C talks about are actually data standards that say nothing about how the browser is supposed to render the data it receives.  In fact, the W3C views this as a plus, because browser data should be displayable in a variety of formats (i.e. large text, screen readers, etc.).  In other words, they've confused supporting multiple presentation formats with not specifying most of the presentation-layer behavior at all.

That's the real reason the web sucked before WaSP.  The W3C had no concept of a presentation layer, even though technologies like Postscript had already paved the way for true device-independent layouts that actually looked good!  Instead the web looked clunky.  It looked broken, even though the data behind it was excellent.  Even today, the W3C does an excellent job of providing data validation for CSS, HTML, and the numerous other data formats it defines.

But WaSP was the first (and the only) group to actually define a reference visual compliance test for what layouts should look like if they followed all the standards.  This effort wasn't completely defined by the W3C -- there was some filling in of grey areas not specified by the standards -- but WaSP led the charge and also led the process of driving that compliance test to acceptance by the industry.

Now browsers could be publicly shamed for failing the compliance tests, and devs could see reference implementations of how the standards were supposed to work.  It was a win-win.

So, now I wonder, as WaSP shuts its doors: who will pick up this charge?  Will it be Ian Hickson (of Google), who worked on the ACID3 test?  Will it be an industry effort run by a non-profit?  Or will the ACID3 test simply age and lapse into obscurity, once again freeing browser makers from really paying close attention to the semantics of the presentation layer?

I hope someone picks up the torch, otherwise it will be a darker world with WaSP gone.