
Sunday, July 13, 2014

Android device encryption 

The description for Android 3.0 at https://source.android.com/devices/tech/encryption/android_crypto_implementation.html implies that only /data is encrypted. Two questions:

  1. What about / and other filesystems?
  2. Has anything changed with Android 4?

Sunday, April 13, 2014

Capital in the 21st Century 

Universally acknowledged to be An. Important. Book.
Reviewed by Paul Krugman.
Summarized by Matthew Yglesias.
Brad DeLong collected 12 early reviews by economists.
Econospeak has a succinct, balanced description for the politically inclined, Piketty for Dummies.

Summary summary: when economic growth slows down, people who own capital still grow in wealth, while people who have only their labor to sell don't get any richer.  I haven't read the book myself (yet), so I don't know whether the author addresses these two effects:
  • "A rising tide lifts all boats" but  leaky boats don't rise as quickly, and their owners have to spend more time bailing than sailing.
  • The rich get richer faster.  They have access to expensive financial advice, and fancy high yield financial instruments that less wealthy people don't have the entry fees for.  They can afford to participate in higher yielding, higher risk investments because they can purchase complex hedging products that reduce their exposure to potential losses. (Update: Robert Solow recognizes this in his review in The New Republic.)
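The "summary summary" above is, at bottom, Piketty's r > g: when the return on capital r exceeds the growth rate g of wages, the gap compounds. A toy calculation (the rates are illustrative assumptions, not Piketty's estimates) shows how quickly it opens:

```python
# Illustrative rates, not Piketty's estimates: capital returns r = 5%/yr,
# wages grow with the economy at g = 1.5%/yr.
r, g = 0.05, 0.015

capital = 100.0  # wealth of a capital owner
wages = 100.0    # annual income of a wage earner

for year in range(50):
    capital *= 1 + r
    wages *= 1 + g

# After 50 years the capital owner is roughly 5x ahead of the wage earner.
print(round(capital), round(wages))
```

Nothing subtle is going on: two compounding processes with different exponents diverge, and a half-century is plenty of time for the divergence to dominate.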
Typical conservative reaction: "Cool! Anyone can become a billionaire! It's easy!"  Typical liberal reaction: "We must tax the rich more aggressively!"

What nobody has any idea how to do: raise the growth rate of global economies when resources are becoming harder to obtain, when processing them into valuable goods creates pollution, and when that processing can be done by robots in any case, i.e. by using capital rather than wage-paying labor.

Saturday, November 02, 2013

Moore's Law for solar power 

Internet entropy strikes again!  The original version of this important article is gone from the Scientific American website:

• Ramez Naam, "The Moore's Law of solar energy", Scientific American guest blog, 16 March 2011.

However, even without having to invoke the Wayback Machine, there's a copy at IEET.

Update (9 April 2014):  The Telegraph declares victory. That is, according to Deutsche Bank, the tipping point where unsubsidized solar power is cheaper than all forms of fossil fuel has already been passed in 19 global regions.

Update 2 (June 2014): The v7 edition of the Lazard Levelized Cost of Energy study, dated August 2013, indicates that by 2015 (next year!) utility-scale solar plants will have a lifetime ROI greater than that of fossil-fueled plants in 6 of the 10 largest US metropolitan areas.  In light of this transformation, in late May, Barclays "downgrades the entire electric sector of the U.S. high-grade corporate bond market".


Monday, October 28, 2013

Tradeoffs in Cybersecurity 

The ever-insightful Dan Geer gave a very interesting talk at the UNC Charlotte Cyber Security Symposium earlier this month.  He's put the text up on his website.  Anyone who's concerned about the tension between cybersurveillance and civil liberties should read it and understand it.

His final paragraphs summarize his argument:
The total surveillance strategy is, to my mind, an offensive strategy
used for defensive purposes.  It says “I don’t know what the
opposition is going to try, so everything is forbidden unless we
know it is good.”  In that sense, it is like whitelisting applications.
Taking either the application whitelisting or the total data
surveillance approach is saying “That which is not permitted is
forbidden.”

The essential character of a free society is this: That which is
not forbidden is permitted.  The essential character of an unfree
society is the inverse, that which is not permitted is forbidden.
The U.S. began as a free society without question; the weight of
regulation, whether open or implicit, can only push it toward being
unfree.  Under the pressure to defend against offenders with a
permanent structural advantage, defenders who opt for forbidding
anything that is not expressly permitted are encouraging a computing
environment that does not embody the freedom with which we are
heretofore familiar.
This is the latest corollary of the basic law of strategy attributed to Carl von Clausewitz 195 years ago: the defender needs to be successful every time (in cyberwarfare, hundreds of millions of times), while the attacker needs to be successful only once.  In order to be totally effective at defense, one must have totalitarian control over the environment and all the actors within it.
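The defender/attacker asymmetry is easy to quantify. In a toy model where each attack attempt is independent, even a defense that stops 99.99% of attempts fails almost surely once the attempt count is large enough:

```python
# Per-attempt defense success rate p, n independent attempts (a toy
# model: real attacks are neither independent nor identically likely).
def breach_probability(p_defend: float, attempts: int) -> float:
    # Probability that at least one attack gets through.
    return 1 - p_defend ** attempts

# A defense that stops 99.99% of attempts is nearly perfect against
# ten attempts, and nearly worthless against a million.
print(breach_probability(0.9999, 10))         # ~0.001
print(breach_probability(0.9999, 1_000_000))  # ~1.0
```

This is why "drive the per-attempt success rate up" can never be the whole strategy: at internet scale, the exponent always wins.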

Or, as Benjamin Franklin put it 250 years ago:
They who can give up essential liberty to obtain a little temporary safety, deserve neither liberty nor safety.

Tuesday, August 06, 2013

Red Plenty -- A dream that failed 

I don't know why there was so much puzzlement about this book when it came out. It's a historical novel, albeit a thoroughly documented one, with 70 pages of notes and references. It's about a period a long time ago and very far away now, the Soviet Union of the 1950s and '60s. It's also a novel of ideas, and the idea is that centrally planned economies can produce a material utopia in which everyone works at what they are good at, and everyone receives everything that they need, without the soul-destroying, self-destructive overshoots of capitalism.

I was in elementary school and high school during this period, and was made to read FBI Director J. Edgar Hoover's book "Masters of Deceit" in order to know how evil communists were. I seemed to be one of the few students who realized that we were being fed propaganda, but I didn't have access to Marx's Capital or even the Communist Manifesto, so I was left with the impression that we were supposed to oppose commies simply because they were the bad guys, in the same way that the Aggies were the bad guys if you were a UT or OU football fan. That kind of opposition didn't seem worth destroying the world in a nuclear holocaust for.

In Red Plenty, the characters are simply trying to get by in a rickety, inflexible economy that doesn't really respond to their needs, just like wage-earners in the US. But they have a secret advantage in the person of Leonid Vitalevich Kantorovich, a real person and a genuine genius, who invented the mathematical method of optimization called linear programming at about the same time that the American George Dantzig did. Kantorovich shared the 1975 Nobel Prize in Economics with Tjalling Koopmans for this achievement; Dantzig, though equally influential, never received it.

It was linear programming that Soviet economists hoped would push their economic system over the threshold from failure to success. It would allow the myriad dependencies among all the supplies needed to produce a washing machine or an overcoat or a limousine or a long-range bomber to be identified and coordinated, so that every component was produced in just the right amount, without the surpluses or shortages that would prevent each end product from being completed in the quantity needed. In capitalist economies, the coordination problem is solved by a myriad of free markets, but markets lead to profits and profits lead to capitalists, and in the Soviet Union, that could not be allowed. So marketless central economic planning had to be made to work, regardless of the consequences. And the consequences were severe, and even then, economic planning failed.
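The kind of coordination problem linear programming solves can be sketched with a toy two-good planning exercise (all numbers invented for illustration). For a two-variable LP the optimum always sits at a vertex of the feasible polygon, so we can simply enumerate the vertices:

```python
# A toy planning problem of the kind linear programming solves
# (all numbers are invented for illustration).
# Maximize output value 3x + 5y, where x = washing machines, y = overcoats,
# subject to shared resource constraints:
#   steel:  x +  y <= 4
#   labor:  x + 3y <= 6
#   x >= 0, y >= 0
# For a 2-variable LP the optimum lies at a corner of the feasible
# polygon, so enumerating the corners suffices.
vertices = [(0, 0), (4, 0), (0, 2), (3, 1)]  # corners of the feasible region

def plan_value(point):
    x, y = point
    return 3 * x + 5 * y

best = max(vertices, key=plan_value)
print(best, plan_value(best))  # (3, 1) 14
```

A real economy has millions of goods and constraints, not two; the simplex method (Dantzig) and Kantorovich's equivalent machinery exist precisely because vertex enumeration blows up combinatorially.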

Red Plenty is a much richer story than can be captured in a few paragraphs - it deserves extended analysis by many authors. And it has received these analyses in an internet symposium by a distinguished team of high-end bloggers organized by the "editors" of Crooked Timber. In addition to some of the usual Crooked Timber contributors such as John Holbo and John Quiggin, it contains essays by economists Brad DeLong and Cosma Shalizi, and writers Ken MacLeod and Kim Stanley Robinson, among 14 others. This amounts to an open review journal study, since in addition to the symposium essays, as a blog each essay was allowed to have comments from any reader who cared to reply. Some of the comments are extremely insightful, though others are extremist, intolerant, and/or uninformed, as blog comments will be.

The highlight of the symposium for me is the essay by Shalizi, who uses computational complexity theory to explain that the planning process itself is simply too big and time-consuming to work in a real economy, even using today's hyperscale computers that are millions or billions of times more powerful than the BESM-6 mainframes available to Kantorovich and his colleagues.

Of course, economic interactions are often nonlinear and would have to be addressed by a generalization of linear programming rather than by LP directly. But while the complexity of mathematical optimization is robust to some kinds of nonlinearities (convex functions), many of the relationships in real economies are nonconvex (increasing returns to scale, for example), and the known algorithms for optimizing them are much slower (exponentially slower) than those for linear or convex problems.
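The gap between the convex and nonconvex cases can be sketched numerically. A ternary search (valid only for unimodal objectives) nails the minimum of a convex function in a few dozen evaluations, while on a bumpy nonconvex function the same local method can stop in the wrong valley, forcing a fallback to exhaustive search; both objective functions below are invented for illustration:

```python
import math

# Convex objectives yield to fast local methods: ternary search needs
# only O(log(1/eps)) evaluations to locate the minimum of a unimodal f.
def ternary_min(f, lo, hi, iters=100):
    for _ in range(iters):
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if f(m1) < f(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2

convex = lambda x: (x - 3) ** 2
print(round(ternary_min(convex, -10, 10), 3))  # 3.0

# On a nonconvex objective with many local minima, the same local method
# can stop at the wrong valley; in general one falls back to exhaustive
# search, whose cost grows exponentially with the number of dimensions.
bumpy = lambda x: math.sin(5 * x) + 0.01 * (x - 3) ** 2
local = ternary_min(bumpy, -10, 10)
grid_best = min((i / 100 for i in range(-1000, 1001)), key=bumpy)
print(round(bumpy(local), 3), round(bumpy(grid_best), 3))
```

In one dimension the exhaustive grid is merely tedious; in the thousands of dimensions of a real economy it is astronomically infeasible, which is Shalizi's point.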

Worse than that, in economies that are not planned, such as market economies, it is impossible to reliably predict future behavior. The reason follows from the diagonalization argument used by Gödel and Turing in their proofs of incompleteness and undecidability. As soon as you make a valid prediction about an economy, someone can take that prediction and use it to arbitrage the markets being predicted, and the money involved in exploiting the prediction will affect the market itself, and thus invalidate the prediction. There is quite a bit of interesting research to be done in characterizing the magnitude of those invalidation effects.
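The self-invalidation argument can be sketched in a toy model (all quantities illustrative): a published prediction of a price rise invites arbitrage, and the arbitrage itself pulls the predicted rise into today's price, so with enough capital chasing it the prediction cannot come true:

```python
# Toy model of a self-invalidating forecast (all numbers illustrative).
# A published prediction of a price rise invites arbitrage: traders buy
# today, pulling the predicted rise into the current price. With enough
# capital chasing it, the predicted rise disappears entirely.
def realized_rise(predicted_rise: float, capital_fraction: float) -> float:
    # capital_fraction is the share of the arbitrage opportunity that
    # informed traders can fund; at 1.0 the prediction is fully traded away.
    return predicted_rise * (1 - min(capital_fraction, 1.0))

for capital in (0.0, 0.5, 1.0):
    print(capital, realized_rise(10.0, capital))
```

With no capital acting on it the 10-point rise is realized; with the opportunity fully funded, the realized rise is zero and the forecast has invalidated itself.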

It's not clear that capitalism can be saved from recurring, unpredictable disasters, but we know that the Soviet Union could not be saved. Red Plenty may change your view of that collapse from a triumph of capitalism to that of a tragedy of socialism.

Thursday, August 01, 2013

Visualizing privacy breaches 

David McCandless has created a website titled Information is Beautiful, to help promote better visualizations of information.  One of the pages that his team has created is titled "World's biggest data breaches", but it lists only "selected" incidents, so you can't be sure that it's representative.  These are really privacy breaches involving only personally identifiable information, so the 700,000 secret documents obtained by Bradley Manning and distributed by WikiLeaks aren't shown. Additionally, Information is Beautiful is a visualization team, not a security team, so the page may not be updated with new data in the future.

Nevertheless, the graphic has some quite cool interactive features, like the ability to filter by Method of Leak. This is very useful to a security manager who's trying to decide which kind of breach to focus preventive resources on.  If you can't address all possible risks, you should focus on the ones that are most likely to cause significant losses.  The biggest one is "hacked", but this usually means that the organization's admins were sloppy and didn't follow the security team's directives, thus letting the hackers in.  Breaches of organizations that are doing everything right are actually quite rare.

Edward Snowden & the NSA 

More noise than signal in the punditocracy, but a few insightful analyses can be found:

I know that there's at least one more analysis of this depth out there somewhere, but I can't find it now.


Wednesday, July 31, 2013

Security vulnerabilities in life-critical software 

Nick Schetko has a very nice overview of the problem, on the financial website Minyanville of all places, titled "Pacemakers, Cars, Energy Grids: The Tech That Should Not Be Hackable, Is".  The article mentions air traffic control software and the new generation of vulnerabilities to GPS jamming and spoofing, but it doesn't mention aircraft flight control software itself (the stuff that allows "fly by wire" piloting), the scary insecurity of medical care systems such as radiation therapy machines, ICU monitors, and drug prescribing and delivery software, or industrial process control systems.  If chemical companies aren't careful, their facilities can become weapons of mass destruction rather than mere tragic accidents such as Bhopal and Seveso.

The forgotten history of the other internet 

Nice article from Andrew Russell writing in IEEE Spectrum, "OSI: The Internet That Wasn’t.
How TCP/IP eclipsed the Open Systems Interconnection standards to become the global protocol for computer networking."  You can see some of the reasons for the success of TCP/IP in the article, but the most succinct summary to me remains the slogan "rough consensus and working code".   This principle guarantees the triumph of useful technology over the politico-bureaucratic warfare that too often characterizes processes like ISO standards-making.

Monday, March 25, 2013

It's addictive, why isn't it illegal? 

Modern processed food products, that is.  I just came across a wonderful NY Times article, The Extraordinary Science of Addictive Junk Food, which explains how food manufacturers have tuned their products to be so compelling.  I happen to be one of the victims.  While I can stay away from most processed food products, put me near a bag of anything from the "salty snacks" grocery aisle, and I can't stop until the bag is empty.  Far more insidious than irukandji box jellyfish venom, these products exploit a bug in the biological program that manages human survival, and are just as remarkable in the way they evade the normal defense mechanisms.

Sunday, January 06, 2013

Antifragility: Luck favors the prepared mind 

Nassim Nicholas Taleb gives away the store in this excerpt from his new book at John Brockman's Edge blog.  In an excerpt titled Understanding is a Poor Substitute for Convexity (Antifragility), he lists 7 rules for building a system that can take advantage of black swan events.  Here are the rules:

  1. Convexity is easier to attain than knowledge (in the technical jargon, the "long-gamma" property)
  2. A "1/N" strategy is almost always best with convex strategies (the dispersion property)
  3. Serial optionality (the cliquet property)
  4. Nonnarrative Research (the optionality property)
  5. Theory is born from (convex) practice more often than the reverse (the nonteleological property)
  6. Premium for simplicity (the less-is-more property)
  7. Better cataloguing of negative results (the via negativa property)
Property 2 is the real secret.  Convexity means that given an equal number of wins and losses, the total winnings will exceed the total losses.  Structure the game like this, and you'll go home wealthy.
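That definition of convexity can be checked with a toy simulation: feed symmetric shocks into two exposures, one linear and one with a capped downside (a crude stand-in for the hedged, option-like positions Taleb describes), and the convex one comes out ahead even though the shocks themselves sum to roughly zero:

```python
import random

# Symmetric shocks, asymmetric exposure: losses capped, gains kept.
# The cap level (-0.1) is an illustrative assumption, not Taleb's.
random.seed(0)
shocks = [random.uniform(-1, 1) for _ in range(10_000)]

linear_total = sum(shocks)                        # symmetric exposure: ~0
convex_total = sum(max(s, -0.1) for s in shocks)  # downside capped at -0.1

print(round(linear_total, 1), round(convex_total, 1))
```

The linear player ends up near zero; the convex player, who paid nothing here for the cap (in reality the hedge has a premium, which is where the real arguments start), banks nearly all of the upside.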

In information security, this implies that the goal should be to ensure that catastrophic breaches are structurally impossible.  Once you've assured this, you can drive the ongoing ankle-biter attacks down to a dull roar that can be tolerated indefinitely.  Network engineers avoid catastrophic failure with techniques like carrier diversity.  Unfortunately, they still mostly allow the Cisco vendor monopoly to continue.

System architects in other IT areas haven't learned this lesson, either.  They still allow their entire enterprises to be dependent on "monocultures" of products from SAP and Peoplesoft and the like.   Here the herd instinct predominates.  "Nobody ever got fired for buying from IBM" -- if IBM, or ADP, or salesforce.com goes down, everyone else goes down too, and you can't be especially blamed.  But if you had diversified, then your enterprise wouldn't have gone totally down, and you would be positioned to step in while your competitors were struggling or failing, and win big.




Wednesday, December 26, 2012

"Occupy Wall Street" - a pretend revolution 

Best analysis yet: The Baffler, which appears to be a left-wing magazine reincarnated as a "little magazine" published on paper by the MIT Press and online as well, has an insightful analysis titled "To the Precinct Station: How theory met practice …and drove it absolutely crazy" which describes how the OWS "movement" was co-opted by academics who were more interested in "theory" and "community building" than they were in creating an effective, lasting organization that could accomplish the hard work of doing what needed to be done.  In the end, participants in the OWS campout were just in Zuccotti Park for the carnival, fooling themselves about changing the world just as much as conservatives were fooling themselves about the state of the voters in the recent election.

Sunday, October 21, 2012

Top 3 reasons why conservatives hate conservation 

I've been puzzled for a long time why political conservatives are almost universally opposed to any measures to preserve and protect the natural world.  You'd think that the conservative desire to keep things as they are would extend to keeping the great mountains, forests, plains, rivers, and deserts that North America has been blessed with in a pristine state, but it doesn't seem to work that way.   What's really going on?

  1. Inability to distinguish dominion from destruction.  In the Biblical story of creation, when God cast Adam and Eve out of the Garden of Eden, he gave them authority over all the living things of the earth.  Six thousand Biblical years ago, people didn't have the ability to do much more than protect their villages and livestock from large predators such as lions, wolves, and eagles with spears and arrows.  Now we have barbed wire fences that span continents, massive farm tractors that can do the work of 500 horses, and heavy earth-moving equipment that we use to literally move mountains in order to obtain the coal within them.  We can change entire ecosystems, and we do.  Conservatives need to take their Lord's injunction far more seriously, and cast off childish attitudes that they are helpless against the might of natural forces.   If I were God, I'd be asking "What have you done to my garden?  You have killed thousands of kinds of animals that I took care of myself because Noah only had only one Ark, and you have turned vast regions into lifeless deserts, and you are planning to do even more.  I have sent many prophets such as John Muir, but you have not listened.  Woe be unto you!"
  2. Viewing all of nature as a store of resources to be exploited.  Just like "the only good Indian is a dead Indian", the only good land is land that can be farmed or mined, preferably both.  But mining takes preference, regardless of its destruction of agricultural capability.  When I was young, my friends would go swimming in the nearby "strip pits" that had filled with water after they had been mined and abandoned without even restoring the topography back to its original gentle hills.  Before the EPA and related legislation required mining companies to replace their tailings, you could drive on US Highway 40 for a hundred miles through Indiana and Ohio -- the best farmland in the world -- and see nothing but hundred-foot-high ridges of strip mining spoils, with the occasional giant excavator showing its masts above them.  But before it was farmland, those Midwestern plains were tallgrass prairie harboring hundreds or thousands of different species of grasses and insects.  Now that land is planted with genetically modified corn, soybeans, and wheat that is poisonous to insects, and cultivated with "no-till" methods that save fuel used for plowing by saturating the soil with herbicides, so that broadleaf weeds and prairie grasses cannot survive.  The result is mile after mile of a single-species landscape that is held hostage to the patent-protected seed stocks of Monsanto and Pioneer Hi-Bred, and can be catastrophically wiped out by unplanned weather conditions or invasive, pesticide-resistant fungi or caterpillars.  To the conservative, this is good, because it allows those companies to extract higher profit margins today by deferring the cost of damage to future generations.
  3. Nature is the ultimate outgroup. Conservatives are an exclusionist movement.  They want everyone to think like them, and they spend a lot of time arguing about who is a true believer and who is, for example a "Republican in Name Only" and attempting to expel them from their group.  One suspects that if the technique hadn't been invented by the Chinese Communists, that they would be using "self criticism meetings" in order to shape behavior.  Religious groups with their affirmations of commitment serve a similar function in "separating the sheep from the goats", and driving all differences towards the core beliefs, regardless of merit.   Nature, of course, was there first, and it cannot be controlled, directed or shaped.  Whatever your religious or political doctrine, nature will not follow it.  This must not be allowed.  To the conservative ideologue, untamed nature cannot be permitted to have any legitimate status in the community.
Liberal conservationism has its problems with preserving the natural world as well, notably the notion of a "natural state" that can be defined and preserved in stasis forever, notwithstanding that it was created by billions of years of perpetual change, the idea of the "noble savage" untouched by civilization who must be kept ignorant and deprived of its benefits in health and comfort, and the notion of voluntary poverty that would save the world if only everyone would give up lighting, heating and air conditioning, and travel.  But that's a different discussion.

Friday, September 07, 2012

DE4 component submodels 

A fragmentary note on a bit of structure transcribed from some scribbling on the whiteboard in my office - food for thought:

Ecological
Environmental
Energy
Economic

Saturday, July 07, 2012

The next 236 years 

Eric Roston at Bloomberg.com asks "Can the U.S. Economy Be Sustained for Another 236 Years?" with a predictably unsatisfactory answer.

I'm sure that in 1890, when the U.S. Census declared the American Frontier to be closed, and there was no more "unoccupied" land left to be taken by the white man, and the country was still scarred by one of the worst civil wars in history, making what's going on in the Middle East now seem like child's play, people were reasonably asking whether the country could survive as long again as the 114 years since 1776.

And then the millennium arrived, and more than a century had been survived with substantial success. The U.S. economy in 2012, with air conditioning, jet airliners, internet video, and electric automobiles, not to mention hedge funds and risk arbitrage, is very different from "civilization as we know it" in 1890.

I have no doubt that the US and its economy will be as different in the year 2248 as an economy of 50 states is different from that of 13 English colonies. There is no doubt that many politicians will continue to be venal, corrupt hacks, as they have been for the past 236 years, but they will probably still have been elected by a majority of voting citizens who will get what they asked for.

Saturday, June 30, 2012

Leaderless movements 

Hugo Dixon of Reuters, in a "Commentary" article there, tries to explain how The Revolution Will Be Organized. The title could be a play on Gil Scott-Heron's classic The Revolution Will Not Be Televised.

I have a brief counter-commentary -- They're both wrong:

"Meet the new boss, same as the old boss." The author and his academic sources don't seem to notice the contradiction in what they're saying. That is, that democratic movements can't succeed unless they are undemocratically organized with a dictatorial head or junta to "knock heads together and get everybody to stick to a plan." Karl Marx believed that there would be a "dictatorship of the proletariat" which would fade away to produce true communism. The Soviet Union's dictatorship did indeed fade away, but it was followed by the pseudo-democratic autocracy of Vladimir Putin, not by communism. The Romans tried electing their "dictator" who would voluntarily step down after the wars were over, but that didn't last long, ending when Julius decided to call himself Caesar and become emperor rather than step down. It's not yet politically or academically respectable to say that all forms of government evolve to become dictatorships or monarchies, so we end up with incoherent articles like this one.

Face it, democracy is hard. It requires the people to elect representatives, not leaders. It requires the people to communicate thoughtfully with those representatives, and the representatives to reasonably and thoughtfully work with each other on common problems. When major political movements are based on the premise that negative campaign ads work better than constructive discussion, that cooperation is evil and that members of other political parties are traitors, democracy will continue to deteriorate.

Social media have the opportunity to bypass power-hungry leaders and allow the people to communicate directly with each other, making it possible for leaderless democratic movements to react and refocus more efficiently and rapidly than ever before, but their technical architecture with centralized software and servers makes them just as corruptible as the old fashioned political machines that used smoke-filled back rooms instead of giant server farms.

Saturday, April 21, 2012

Why government subsidies don't work 

Hysteresis.  The Economist's Free Exchange blogger, who signs only with the initials R.A., makes an insightful comment in a post about subsidies for electric cars.
The tried and true aphorism [is] that government isn't any good at picking winners. This isn't, by the way, a knock on government. No one is particularly good at picking winners. The problem for government is that while market-produced losers usually fail and go away, making room for winners, government-produced losers tend to stick around for a while, sucking resources away from potential winners. No one knows in advance whether something will work; government's failure is in its relative unwillingness to clear away the chaff.
In economist-speak, the subsidies that Free Exchange describes are "Pigovian subsidies", the converse of the better-known Pigovian taxes. The reason that this kind of tax works better than a subsidy is that the lifecycle of an enterprise is asymmetrical: the growth phase is much shorter than the decline phase. The cumulative penalty of a tax during growth is therefore less than its drain during decline, putting failing enterprises out of their misery earlier, while the cumulative effect of a subsidy is reversed, having a smaller effect during growth while prolonging declines. If a government creates equal numbers of subsidies and taxes with equal rates, the total effect will be negative.

Friday, March 30, 2012

Security depends on quality 

Are we becoming more tolerant of quality problems?  Or have our systems become so big and complex that quality is an impossible goal?  How can you have solid security if the systems that you're trying to secure don't even perform their primary functions in a high-quality, reliable way?

Computers and robots don't make mistakes, right?  But there's a complementary saying: "To err is human, to really screw things up requires a computer."  After all, the things were designed by error-prone humans.

Fast food restaurants have achieved much of their success by creating a product that can be considered "high quality" in that it is identical each and every time, no matter where in the country, and almost anywhere in the world, you buy it.  Yet on my last visit to one of the top 3 franchises, they got 3 of the 5 items in my order wrong, and while I was there they made errors in the orders of two more customers.

One of the reasons that Apple is such a powerhouse is that it has achieved a level of quality that its competitors can only aspire to, even a decade or more after the infamous "blue screen of death" was common.

But a monolithic ecosystem of total control is not the only path to quality.  Most of the web runs on the Linux OS and the Apache web server, both completely cooperative, transparent, loosely coordinated enterprises, and both achieve higher quality than their closed, commercial competitors.

Engineers have another slogan: "We can build it for you fast, cheap, or good.  Choose any two."  You could survey people to ask whether they've become so used to fast and cheap that "good" has come to seem impossible.

Luxury is only a surface characteristic anymore.  The luxury smartphone doesn't have any better software or give you any better sound than the iPhone that millions of people carry, though it may come in a gold-plated case.  Even Bill Gates runs Microsoft Windows on his PC.

My taxes were almost lost when my tax preparer's PC crashed at the end of his tediously entering all the data.  He wasn't sure that it had made a backup for him; he hadn't bothered to check that any kind of backup was ever made.

My Toyota Prius has the same navigation software as a Lexus, although the Lexus has a somewhat bigger screen.  Both systems have the same bugs and usability problems.

Toyota has a well-deserved reputation for quality, but they can't deliver an operator's manual that correctly describes how the hand controls relate to the headlight settings.  As I was sitting with the "finance manager" at the dealer completing my purchase, his computer crashed and was unable to print some of the government forms - we had to move to another office to finish all the paperwork.

Now, what might happen to General Motors dealers' ability to close a sale with correct pricing and product option information when their new Chief Information Officer is known for decimating the Information Technology Department at his previous job, firing all the high-salary, experienced veterans and replacing them with low-wage workers offshore?  How can quality be maintained in a regime whose goal is rapid delivery at unprecedentedly low cost?

Quality has been an issue with systems since before Capt. Grace Murray Hopper's team found the first actual bug, a moth, in a relay of the Harvard Mark II.  In my job I'm dependent on people doing high quality system design and operating those systems reliably while making changes to them, effectively rebuilding the metaphorical airplane while in flight.

Today it seemed that all my problems were quality-related.  To top it off, when I went to my e-book reader's online store to buy "The Checklist Manifesto" for some weekend reading, the shopping cart function wouldn't work.  Argh.

Saturday, February 25, 2012

Making the victim pay for your negligence 

Monsanto has come up with a devilishly clever new way to be a patent troll.  Allow your patented software (DNA sequences that code for herbicide resistance in plants, in their case) to become inadvertently incorporated into someone else's product, then sue them for patent infringement.  When that product is organically grown food, the infringement actually reduces the value of the accused infringer's product.  A group of organic farmers in California is suing for a "declaratory judgment" that these threats from Monsanto are illegal.

This same technique could be used by owners of patented computer algorithms -- let your algorithm escape from your licensing control and become incorporated into a computer virus or worm, then demand royalties from the owners of the infected computers or sue them for infringement.

I have no idea whether this generalization of Monsanto's trolling method is patentable as a business process.  Nor have I read the filings in the farmers' lawsuit, so I don't know whether they contain any hints that any participants in the case are aware of the generalizability of the method.   But if they aren't, here's a statement that I believe this is an obvious generalization, and its obviousness should be grounds for invalidation of any attempt to patent the trolling method.

Wednesday, February 22, 2012

Tunnel Vision 

Actually, more like macular degeneration, where foveal vision is impaired, rather than the tunnel vision that sufferers from glaucoma must live with.  Francis Fukuyama has A Conversation With Peter Thiel, where they start out by discussing blind spots in the political views of both the left (income inequality) and the right (government inefficiency).   Their discussion quickly takes a much more interesting turn, towards "their common blind spot, which we’re less likely to discuss as a society: technological deceleration and the question of whether we’re still living in a technologically advancing society at all. I believe that the late 1960s was not only a time when government stopped working well and various aspects of our social contract began to fray, but also when scientific and technological progress began to advance much more slowly. Of course, the computer age, with the internet and web 2.0 developments of the past 15 years, is an exception. "

It's easy for a biologist to imagine that social and economic progress follows a logistic curve that starts out exponential but flattens off as resource limits are approached.  It's also the case that after the easy problems are solved, the remaining ones become exponentially more difficult, producing the same slowdowns, though without the hard upper bound.   The social difficulty is that, as Thiel observes, our political systems are built on the promise of never-ending growth.   You can't get re-elected by promising that there are not going to be any more free chickens for every pot.
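For the curious, the logistic curve in question is easy to play with; here's a quick sketch of mine (the parameters are invented, purely to show the shape):

```python
import math

def logistic(t, K=100.0, r=0.5, x0=1.0):
    """Logistic growth: looks exponential while far below the carrying
    capacity K, then flattens off as the resource limit is approached."""
    return K / (1.0 + ((K - x0) / x0) * math.exp(-r * t))

growth_early = logistic(1) / logistic(0)   # nearly the exponential factor e^r
near_limit = logistic(40)                  # essentially pinned at the ceiling K
```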

More compute power is an essential prerequisite for getting out of this trap.  With enough bandwidth and large enough displays and enough compute power to drive them, "as good as being there" can become a reality and the limits to material resources and the costs of transporting those resources can become effectively non-issues.    However, better, more reliable, more trustworthy software is also a prerequisite.  The complexity mountain is a problem for software, too.

Wednesday, October 12, 2011

Most realistic sci-fi film scene ever 

Watching a rerun on TV and once again marveling at Dr. Floyd's speech at Clavius in 2001: A Space Odyssey.   Classic, timeless bureaucratic detachment.  It takes a director with Stanley Kubrick's genius to put something so monstrously boring on the screen.


Sunday, August 21, 2011

The Law of Political Necessity 

Saw this somewhere, not sure where, but it explains a huge amount of nonsensical political behavior and budget bloat.  Given an arbitrary crisis:

See also, "security theater"


Friday, July 08, 2011

Who won the space race? 

Houston, We Have a Problem - By Joshua E. Keating | Foreign Policy suggests that Russia "... appears to have prevailed." But what really catches my attention is the size of the budget numbers. About $4 billion each for Russia and Europe, about $1.5 billion each for China and India. Even the private arm of NASA is only about $6 billion. One of the comments mentions a US military budget of $640 billion. In the large company that I work for, it takes a billion dollars just to get an appointment with an Executive VP. As critical as it is to the national imagination, space is really just a hobby for everyone.

Wednesday, June 29, 2011

Whatever happened to stability analysis? 

Alejandro Nadal has a very interesting post titled Whatever happened to stability analysis? which points up the limitations of economic theory. "Stability is one of the most important aspects of neoclassical theory because it addresses the question of just how the mechanism of free competition in the marketplace actually leads to the formation of equilibrium prices. ... Maintaining ignorance about the limitations of stability theory comes in handy when perpetuating the mythology of market theory. As Mundell once remarked, stability analysis is the most successful failure of general economic theory. It is also the best example of how an academic community pushes the most serious problems of mainstream theory under the rug and gets away with it. Students should learn to look under the rug. "

Nadal's article includes references to online copies of key papers in the development of stability theory, but access to nearly every one of them is blocked to anyone without an academic affiliation who isn't willing to pay more than $30 for a 37-year-old paper that probably cost $0.50 to scan.


Monday, May 30, 2011

Loss Equilibrium 

The fundamental question of security management is "how much should I spend on security?"  There are several approaches to this.

There's the paranoid approach: "however much security you have, it's not enough."  This is encouraged by vendors of security products and services, who want you to buy, buy, buy, and don't care if you're spending your money effectively.   It's functionally equivalent to the "priceless assets" approach: "if your assets are infinitely valuable, anything less than an infinite amount of spending on security is inadequate."  This approach is deeply baked into the security industry due to its origins in military security, where the asset value is the entire country.

There's the auditor's approach: "for every vulnerability, a control".  It assumes that controls are 100% effective, and that breaches can be identified and rolled back when detected.  This also creates an ongoing market for security products, since in a system with human components and with computationally universal inputs (that is, one that allows documents with macros, Javascript, and active plugins, not to mention stack overflows and command injection vulnerabilities), there is an infinite supply of vulnerabilities to be protected by pattern-matching and blocking technologies.

Then there's the loss-management approach.  This is based on the notion that losses can be predicted, and that controls can be assessed for their effectiveness in mitigating those losses.  This is the only approach that provides a principled basis for a budget less than "all the money you have".  But how do you manage effectiveness in a principled way, when vendors are motivated to tell you "trust me, it really works great!" and hide any weaknesses that their product or service may have until it's too late for you?  Third-party certifications such as Common Criteria protection profiles ensure a baseline of effectiveness, but the CC certification hierarchy doesn't distinguish levels of effectiveness; it distinguishes levels of assurance that the baseline has been achieved.  A product certified at EAL4 may be no more effective than one certified at EAL2.

Effectiveness is problematic to assess prospectively, but straightforward retrospectively: simply add up the losses actually experienced with a given configuration of controls.  In other words, if you are unable to develop a credible estimate of annual loss expectancy, use historical data for measured annual losses: ALE = MAL.

Now apply the principle of not spending more than the value of the asset to your annual budget.  You have observed MAL, so you can say that annual security expenses shouldn't exceed that value: SE ≤ MAL.
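The whole budgeting rule fits in a few lines; the function names below are mine, purely for illustration:

```python
# Sketch of the ALE = MAL substitution: when a credible annual loss
# expectancy can't be estimated, use the measured annual losses instead,
# and cap security expenses at that figure (SE <= MAL).

def measured_annual_loss(annual_losses):
    """Average the losses actually experienced under the current controls."""
    return sum(annual_losses) / len(annual_losses)

def security_budget_cap(annual_losses):
    """Don't budget more per year for security than you lose per year."""
    return measured_annual_loss(annual_losses)

# Three observed years of losses (breach cleanup, fraud, downtime):
cap = security_budget_cap([120_000, 80_000, 100_000])   # 100000.0
```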

In an environment where threats & assets cannot be effectively and reliably estimated, security expenses will approach an equilibrium with security losses.  This is not good news for participants with assets that are protected by the laws of macroeconomics, such as consumers in a free-market economy whose personally identifiable information is somewhere out there in the cloud.

Saturday, May 28, 2011

Buffalo 

Thanks to a commenter on Brad DeLong's blog: any sequence of the word "buffalo" of length n>1 is a grammatical sentence of English.   As a former Oklahoman, from a state where the buffalo still roam in a few places, I am obligated to point out that they are actually American Bison. 

Wednesday, May 18, 2011

The Paranoid Style in American Politics 

The Paranoid Style in American Politics
By Richard Hofstadter
Harper’s Magazine, November 1964, pp. 77-86.

Hofstadter was a famous professor of American history at the school where I was an undergraduate, though I never took any of his courses.  This article is one of the reasons for his fame.  The paranoid style is evident to any careful observer of politics, but this puts it in a broader context.   No, you're not imagining it; they really are crazy, and they've been that way for a long time.

Friday, April 22, 2011

Ballistic Risk Management 

Also known as "manage first, assess someday".   Compliance-based paradigms do this -- they just make you do whatever their requirements are, regardless of the actual threats or any unique vulnerabilities or immunities your systems may have.  So do best-practice or good-practice risk management frameworks.  If you do what everybody else is doing, you don't have to look at your own threat or vulnerability environment.

Everyone wants to be special.  Except when being special means you might have to do more work to account for those special characteristics.   Then you're just like everyone else, right?

Sunday, March 27, 2011

Mechanical Universal Turing Machine at last! 

I don't think this guy Jim in Lancashire fully understands the magnitude of his accomplishment.   As far as I know, nobody has ever built a finished, functioning, completely mechanical Universal Turing Machine before.  The math and the plans are part of every computer science textbook, and there have been a number of more or less mechanical Turing Machines built, often cheating by using a microprocessor or other electronic components, but they've always been basic TMs, without the programmability that makes a true computer.   And for extra coolness, he created the state transition table using CNC at the Manchester Fab Lab.  

Not to mention his fully fabbed Rule 110 cellular automaton, with a few parts missing, oops.  Rule 110 CA's are also universal, with a nice scandal to go along with their discovery.
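For anyone who wants to play along in software rather than fabbed parts, the Rule 110 update fits in a few lines; this sketch is mine, not Jim's design:

```python
# Rule 110: the rule number, in binary 0b01101110, gives the next state of a
# cell for each 3-cell neighborhood, with the neighborhood read as a binary index.
RULE = 110

def step(cells):
    """Advance one generation, treating cells beyond the edges as 0."""
    padded = [0] + cells + [0]
    return [(RULE >> (4 * padded[i - 1] + 2 * padded[i] + padded[i + 1])) & 1
            for i in range(1, len(padded) - 1)]

row = [0] * 15 + [1]        # a single live cell at the right edge
for _ in range(5):
    row = step(row)         # the characteristic triangle grows leftward
```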

http://srimech.blogspot.com/search/label/turingmachine

Reminds me of one of those naive geniuses that pop up regularly in the SF literature.

Monday, February 21, 2011

The road to sustainability: finding it with DE4 models 

DE4 models are a new paradigm in sustainability modeling: they try to put everything together all at once. DE4 stands for Dynamic Energy, Economic, Ecological, and Environmental. The 4 should really be a superscript, but we don't always have that typographical convenience available.

The overall research program is to view sustainability as a giant problem in nonlinear programming. Nonlinear programming is of course the hard version of linear programming: a mathematical method for optimizing a goal, called the "objective function", subject to a set of constraints. "Objective" in this usage is not in contrast with "subjective"; it is used in the sense of "where we're trying to get to". It's a noun, not an adjective.
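As a toy illustration of the idea (the objective and constraint below are invented and have nothing to do with an actual DE4 model), a nonlinear program small enough can even be solved by brute force; real problems need a proper solver:

```python
# Maximize a nonlinear objective subject to a nonlinear constraint by
# brute-force grid search over two variables in [0, 1].

def objective(x, y):
    return x * y                      # the "where we're trying to get to"

def feasible(x, y):
    return x * x + y * y <= 1.0       # a nonlinear resource constraint

points = ((i / 100.0, j / 100.0) for i in range(101) for j in range(101))
best = max((p for p in points if feasible(*p)), key=lambda p: objective(*p))
# The true optimum of xy on the quarter disc is x = y = 1/sqrt(2) ~ 0.707,
# so the grid search lands near there.
```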

The objective function we're interested in achieving is this:
  • sustainable in the sense of lasting at least as long into the future as civilization has extended into the past, some 3000-5000 years.
  • stopping the decline in biological species diversity. This can occur either by forestalling the extinction of existing species, or by increasing the rate of appearance of new ones. Currently we're out of equilibrium by at least 10,000 to one.
  • Stopping the increase of carbon dioxide in the atmosphere, and secondarily of other pollutants. The days of "the solution to pollution is dilution" are long gone. Mark Z. Jacobson's GATOR model is an example of the state of the art in this area.
  • Transformation of the global energy economy to sustainable sources. Jacobson and Mark Delucchi have concluded that it is technically feasible to transform global energy sources to wind, water and solar within 20-40 years. They are of course wildly optimistic, since neither the political will nor the economic resources are available.
  • Thus integrating economic models into this transformation is necessary. Things are unlikely to change in ways that are unprofitable; causing economies to collapse by raising taxes to unsustainable levels in order to fund energy projects doesn't do anyone any good. Cyclic booms and collapses don't count as "sustainable" even if their long-term average is nonzero.
  • We suspect that it will somehow be necessary to decouple the material economy from the nonmaterial value chain. Many material resources are bounded, but billionaires cannot personally consume all their wealth; it's just places they don't have time to go to and money they don't have time to spend. We would like to know whether a level of health and comfort equivalent to a first-world country in the year 2000 can be achievable for most everyone in the world using market economies.
Clearly, a brute-force search through this kind of parameter space is computationally infeasible. So are bottom-up detailed models such as GATOR. We will need to use top-down approaches based on high level theories and summary data such as that found in David MacKay's "Without Hot Air" studies.

Lastly, we don't have time or resources to track all the trends in scientific data management consortia or modeling environments -- that path easily leads to unproductive thrashing. We'll use SciPy and JSON, and see where that takes us.

dead media: the uncensored internet 

Stealing Bruce Sterling's journalistic beat for a moment -- noticing once again how the walled gardens seem to be winning. Not only the national enclaves in the mideast, which simply shut down the state-run ISP if the rulers don't have the sophistication shown by the Great Firewall of China (Arbor Networks has a neat chart at http://asert.arbornetworks.com/2011/02/middle-east-internet-scorecard-february-12-%E2%80%93-20/), anonymizing proxies notwithstanding, but also the corporate enclaves that were always in the dumb cellphones, now expanding to smartphones with Apple not allowing any apps that don't use Apple's subscription service, and other app-store censoring incidents. Not to mention the RIAA lawsuits, the demise of Pirate Bay, and the net neutrality debate. Whatever happened to Gilmore's Law? Gilmore himself seems to be on both sides of the fence, see http://www.newswireless.net/index.cfm/article/8811

Monday, December 20, 2010

Bozo the Clown's telephone number 

For some unaccountable reason, while I was reading the comment thread for Jeff Masters' blog at the Weather Underground, this number came to me:

Fiddledeedee 555-5555-55555-5552. That's fifteen 5's and a two.

I think it must be related to an observation that many weather prognosticators and climate change skeptics' declarations cannot be distinguished from the results of numerological computations on facts about clowns.

Masters' writings, of course, are the very antithesis of this approach -- they're as scientific, coherent and data-based as it is possible to get.

Friday, October 15, 2010

Analogy of the day... 

Macroeconomics is like climatology. A discipline with a core of practitioners who are seriously trying to be scientific, but which is profoundly hampered by the impact of its predictions on public policy. To the politically-minded, whether the prediction is true or not doesn't matter, only whether it agrees with the party line. That is, the political approach is to use "outcomes-based reasoning" which doesn't need to be self-consistent or significantly reality-based. Believe it or don't.

The New York Times doesn't. An article by David Segal concludes that economics isn't really trying to be successful -- people are just too complicated. He quotes Duke University professor and specialist in behavioral economics Dan Ariely, who says "...the economy is a hugely complex problem. So we either simplify the problem and offer a solution, or embrace the complexity and do nothing.” Or as I say sarcastically, "if at first you don't succeed, give up."

Sunday, October 10, 2010

Amateurs study cryptography, professionals study economics -- true experts study accounting 

The original slogan is "amateurs study cryptography, professionals study economics" -- it is the security version of the "amateurs study X, professionals study Y" pattern. The most famous one is "when amateur generals talk about military affairs over a few drinks, they discuss strategy and tactics; when real generals talk about military affairs over a few drinks, they discuss doctrine and logistics", but that topic is for a different post.

The original version was tracked down by Adam Shostack and Andrew Stewart, in their valuable book The New School of Information Security, to a short 2004 blog post by Alan Schiffman. But the economics of security, which is now a well-established subfield, is a topic for a different post, too.

It should be self-evident that it doesn't make sense to spend more money securing an asset than the asset itself is worth. But how do you know how much an asset is worth? That's accounting. Accounting for information assets is hard, not least because information security professionals are often located in the organization in a position where they don't have access to the business information that ostensibly captures the values of each company asset. If the company is a public one, its regular shareholder statements contain a balance sheet that lists the totals, but the breakdown of the components that go into those totals is often a closely held executive secret. It's also hard because a company's information systems are deeply involved in the creation and securing of intangible values like intellectual property and brand value.

The folks at Risk Management Insight, one of the most experienced, insightful and methodical teams of risk analysts around, have posted a blog entry on how they still are pretty clueless after all these years. They know that looking at losses is one way to force people to think carefully about value, but they haven't yet gotten to the point of relating these losses to Generally Accepted Accounting Principles that their management is obligated to use to compute the balance sheet. Lots of opportunity for research remaining...


Saturday, October 02, 2010

Cyber Attack Threat Map 

I'm only 4 months late mentioning it, but SANS has published the 2010 edition of their threat taxonomy poster. Now, how do you add them all up to find out the magnitude, intensity, and skill level of the total threat barrage that's being thrown at your systems? As far as I know, nobody has any method that's even internally self-consistent, much less capable of encompassing the complexity of the combinations that these pose against an enterprise of any significance.

Tuesday, September 28, 2010

Security is a wicked problem 

Tractable, intractable; easy, hard; well-posed, ill-posed; tame, wicked. A classic paper by Horst Rittel and Melvin Webber (1973), titled Dilemmas in a General Theory of Planning, first articulated the distinction. Tame problems have a clear, singular goal, and they have answers that can be recognized definitively when they appear. Nobody may know how to prove P ≠ NP, but it will be very clear when the answer is known. Wicked problems don't have either of these properties. Nobody can give a good answer to the perennial question "is my organization secure?" or even its more realistic formulation "is my organization secure enough?" But we keep plugging away, because the alternative is to accept chaos and destruction.

Monday, September 06, 2010

Atlas Shrugged - nothing happened 

That's because the world is holding itself up, and Atlas was under delusions of essentiality. I recently reread Ayn Rand's famous novel, and found myself remarking on how empty all these characters really are. The essential observation is that all the "heroic" characters are basically childless orphans. This is only possible because they have a philosophy of life that is disconnected from the fact that they are biological organisms descended from ancestors who took care of their children without regard for the payback those children might provide to them.

These characters' philosophy of money is also defective. Money has no intrinsic value; it only acquires value when it is used as an intermediary in transactions. They scorn people who obtain wealth without earning it, yet the miner who picked gold up out of a stream in California or Alaska had done nothing to earn it. The acquisition of gold without work is of course why there were gold rushes in 1849 and 1889.

Money whose value is based on some material object is subject to supply and demand just like any other commodity. The whole point of money is that it can be used to buy anything, and in order to make it applicable to anything, money has to be a virtual object whose value is kept constant by fiat. In normal times, when people's work adds value to a product, it creates more value in the world, and that value has to be matched by the creation of an equivalent amount of money. If no new money is created, there is less money to go around than the value in circulation, and the money itself becomes more valuable, i.e. deflation occurs. When a very productive society creates value faster than some material carrier such as gold or silver can be dug out of the ground and refined, people who have stored the accumulated value of their past work in that carrier lose some of their value, which is unfair to them and bad for society as a whole.

"Gold bugs" who insist that "fiat money" is somehow evil cannot follow this logic. My best explanation for them is that they have not fully advanced to a stage of cognitive development that is able to understand abstract concepts. If they can't hold it in their hands, it's not real. Unfortunately this class of people constitutes a very large fraction of mankind, including of course Ayn Rand and all the characters in her novels.
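The deflation argument can be made concrete with the textbook exchange identity MV = PQ; the numbers below are invented for illustration, not taken from the novel or any real economy:

```python
# Exchange identity M * V = P * Q: hold the money stock M and velocity V
# fixed while real output Q grows, and the price level P = M * V / Q must
# fall, i.e. deflation.
M = 1000.0     # money stock
V = 2.0        # velocity of circulation

def price_level(Q):
    return M * V / Q

p_before = price_level(2000.0)    # 1.0
p_after = price_level(2200.0)     # output grew 10%, prices fell about 9%
```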

The third conceit in Atlas Shrugged is that there are just a few honest, productive people in the world. This is not a new concept; we've had it around since the Greek myth of Diogenes wandering the land with a lantern trying to find someone, anyone with an honest face. It only works in Rand's novel because John Galt is literally a deus ex machina, whose magical motor could solve the world's problems if only he could be convinced of the worthiness of the people upon whom his beneficence would be bestowed.

The fundamental fact of systems governed by natural selection, including economic systems, is that they grow by themselves. There aren't any secret cabals of rulers, financial or engineering, who can be taken away and the system will suddenly and catastrophically collapse.

Nobody is in charge of the global economy. This is very hard for some people to take. If it's not their own father who was in charge of the family's lives for many years, it's their paternalistic boss, or their governmental head, or a heavenly father who guides all things. Atlas Shrugged asserts that it's a few competent industrialists in a 1930's era economy: a miner, an oilman, a steel mill owner, a railroad manager, a banker (what? Bankers count as productive citizens?), a judge, a philosopher, and over them all, a superhuman inventor who disdainfully repairs the very electronic torture machine that is being used to coerce him into running the country. It's an entertaining, if tortuously lengthy story. Woe betide him who takes it seriously.

Saturday, September 04, 2010

The logic of denialism 

I was discussing climate change denialism with some colleagues at work and realized that the following list of excuses works for just about any topic. It's commonly used by defense lawyers in cases of corporate wrongdoing, but I've never seen it briefly summarized.
  1. it never happened
  2. even if it happened in the past, it's not happening now
  3. even if it's happening now, it's not due to anything we did
  4. even if it is due to something we did there's nothing that can be done about it
  5. even if there's something that can be done about it, it shouldn't be done for other reasons
  6. even if it is due to something we did, it wasn't with malicious intentions and we shouldn't be held responsible
  7. even if something should be done, we shouldn't have to pay, somebody else should
  8. even if we ought to pay for the fix, paying will consume all of our profits and we'll go bankrupt and then somebody else will have to pay anyway
  9. even if we won't go bankrupt, our profits will be reduced, and this is bad for the country if not for the world
  10. solving the problem is revenue-neutral, we could get a lot of good press and "brand reputation" if we fixed it
  11. hey, we could increase our profits if we really fixed this problem
My perception is that the denialist side of the global climate change argument is at stage 3 or 4, while the environmentalist side is mostly at stage 10, and thinks that's good enough. That leads to valuing perception over reality, and to a lot of "greenwashing" marketing of cosmetic changes that don't actually affect the real problem.

Saturday, July 24, 2010

The pending carbon regulations 

Environmental politics pundits are all sad about how Congress isn't going to work on CO2-limitation legislation for a while. I think they need to go back to their childhood fables and reread the story of Brer Rabbit and the Tar Baby. Republicans think they've caught this legislation in a Tar Baby of obstruction and filibusters. Now the Obama Administration is getting thrown into the briar patch of EPA regulations. Foreign Policy's Steve Levine has gotten an administration official to explain the bureaucratic strategy, but he doesn't make the connection.

When the EPA first made its finding that the climate impact of anthropogenic CO2 and 5 other greenhouse gases endangers the health of U.S. citizens (the "CAA endangerment finding"), the Obama administration made it clear that if Congress didn't produce legislation, the EPA would act unilaterally. No deficit-reducing taxes, no free-market cap and trade framework, simply a flat limit on emissions, just like benzene, ozone and other pollutants. "Please don't throw me into the briar patch!"

Friday, December 11, 2009

ODBC password encryption 

A colleague recently reminded me of a fact that shows once again how dark these cybersecurity ages are. ODBC password encryption is an oxymoron -- there isn't any encryption. Here's what a widely reprinted FAQ answer states:
How secure is ODBC?

Any ODBC sniffer will be able to trace everything from an ODBC perspective. This includes data, usernames, passwords etc. However, if you are using an ODBC driver that provides encryption, you can increase your level of security.

Since any front-end tool can effectively connect to and modify your databases, you need to enforce security at the server level.

On the other hand, if you use TCP/IP, ODBC security should be the least of your concerns!

It should be massively embarrassing to every security professional that the basic rule of never transmitting or storing passwords in clear text still doesn't have a standard, default implementation even now, many years after the first ODBC specification was published in 1992. ODBC is really an API, not a network protocol, and it was created for a non-networked environment in which communication between the client process and the DBMS would occur through within-system interprocess communication (OS traps using shared memory, or intrasystem messages) where security can be rigorously enforced, rather than the modern environment where database client and server processes run on different computers with an open, possibly hostile network in between. But that is not really an excuse. Vendors have had seventeen years to work this out!

Some ODBC libraries do support SSL session encryption, and if you encrypt everything, then passwords get encrypted too. But passwords should be encrypted always and everywhere. If every OS was able to figure out that this is required decades ago, DBMS products should be able to figure it out, too.
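For reference, the baseline asked for here takes only a standard library; this is an illustrative sketch of mine, with nothing to do with any actual ODBC driver's internals:

```python
import hashlib
import hmac
import os

# Never store or compare passwords in clear text: keep a salted, slow hash
# and verify by recomputing it. (Transport encryption, such as the SSL
# session encryption some ODBC libraries support, is still needed on top.)

def store_password(password):
    """Return (salt, digest) suitable for storing instead of the password."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, digest):
    """Recompute the digest and compare it in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)
```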

Monday, December 07, 2009

Hydrogen-compressed natural gas blends 

It turns out that the name for blends of hydrogen and methane that I used in a previous post is already trademarked by The Hythane Company LLC, and is specific to a blend of 20% hydrogen and 80% methane.

While ownership of the name is good for that company, it's bad for the industry, which has to use some other, less felicitous term, such as HCNG, which is used by NREL. DOE’s Advanced Vehicle Testing Activity (AVTA) spells it H/CNG, and has vehicles using 15%, 30%, 50% and 100% hydrogen.

Sunday, November 29, 2009

Academic research in security - misguided again 

A few weeks ago, Science magazine, one of the most prestigious general-readership journals (if you can call a polymath scientist a "general reader"), published a short article in its perspective section by two of the most eminent computer engineering researchers in the US, William Wulf and Anita Jones, about computer security, titled "Reflections on Cybersecurity". Their summary is almost accurate: "Cyberspace is less secure than it was 40 years ago. That is not to say that no progress has been made—cryptography is much better, for example. But more vital information is accessible on networked computers, and the consequences of intrusion can therefore be much higher. A fresh approach is needed if the situation is to improve materially." And their discussion, behind a membership barrier or a typically outrageous $15.00/day per article pay-per-view fee, is generally correct. They list a number of ways that security goes wrong even with the best designs and the best methods.

Their error is in their conclusion, that public key cryptography is the miracle cure: "we conjecture that by providing just a way of accessing the public key of an object, one could build an arbitrary end-to-end security policy." Yes you can probably build an arbitrary end-to-end security policy, but in my experience with public key infrastructures, it will be intractably complex, in the technical sense of being NP-hard to administer in all but trivial usage structures. This is the same kind of error that occurs in real life with role-based access control schemes: for naturally occurring organizations rather than artificial examples, you quickly end up with more roles than people, and the system, though elegant, costs more to operate and administer than the messy environment that you started with.

Any system with crystalline simplicity such as the one that Wulf and Jones are looking for will have the brittleness of crystals, too. Strike it at just the right angle and it will fail disastrously. They have failed to recognize the key design decision by Tim Berners-Lee that made the World Wide Web scale so remarkably. Unlike nearly all previous hypertext systems, the WWW does not automatically create backlinks with every forward link, and it doesn't automatically update links when their targets change or go away completely. The Web expects errors and deals with them routinely. Even the very advanced semantic web, which is otherwise little more than a type system for XML objects, expects to see uncomputable type specifications and deals with them routinely.

If academic researchers want to make significant advances in security, they need to come to grips with the notion of "robustness" and not confuse it with "simplicity". The two are related, in that simple systems are often easy to make robust, but they are not the same. Two of the most robust systems we understand, the immune system and the behavioral programming of the nervous system, are also among the most complex systems known.


Friday, October 23, 2009

Why economists should be opposed to nuclear power 

Because the external costs are outrageously high. Not only for nuclear, but for lots of other energy technologies, too.

No principled economist should be for nuclear energy, because its costs are dominated by rare, severe events with extremely long-tailed statistical distributions. Unlike chemicals such as PCBs, where the cost of projects such as the cleanup of sediments in the Hudson River is merely unimaginably huge, there has never been a cleanup of a nuclear site so successful that it's now suitable for residential use.

Other chemical disasters also have infinite costs — consider the permanent loss of the entire town of Times Beach, Missouri due to dioxin contamination. It's also true that the costs associated with coal tailings and other mining wastes have equally long tails. Picher, Oklahoma is being abandoned due to mountains of toxic lead and zinc mine tailings that cannot be cleaned up at a cost less than the total value of the town.

We cannot base a permanent energy economy on extraction-based activities that cause progressive, permanent damage to the environment — sooner or later we'll end up with all of the environment contaminated, and we'll have no good places left for ourselves. If you like nuclear energy, we already have a wonderful source of fusion energy that produces far more power than we've been able to capture so far, and it keeps its waste to itself, at a safe distance of 93 million miles. Photovoltaic, solar thermal, wind, hydro, and wave energy produce no toxic waste needing cleanup after the plants have completed their lifespans. Not to mention photolysis of water to produce hydrogen, which promises a chemical fuel made in home power plants for people who have an emotional need for a viciously roaring internal combustion engine in their car rather than a meekly quiet electric motor. But solar hydrogen technology is much less far along than the other renewable technologies.

Natural gas is a useful low-carbon fuel, but it can only be a transitional stage to a fully sustainable energy economy.


Tuesday, September 15, 2009

Path to a hydrogen-based energy economy 

It's all about aligning supply and demand.

The U.S. Energy Secretary, Steven Chu, has put the government on a path to a renewable, carbon-free energy ecosystem that is based on electricity and battery storage for stationary uses and short-distance transportation, and biofuels for long-distance transportation. This is a perfectly valid path, but it's not the only one.

H.R.1622 was passed unanimously by the House and referred to the Senate Energy Committee on July 21. This bill directs the energy secretary to implement a 5-year program to enhance the capabilities of Natural Gas Vehicles (NGVs) in 12 areas, including fuel storage, fueling stations, and NGV-electric hybrids. These capabilities are a necessary next step, but they don't provide the big picture that gets us where we need to go. Here's a sketch of a path that does. There's a lot more to this picture than there is space for here; the National Renewable Energy Laboratory has done a lot of heavy lifting in this area.

  1. Expand interstate infrastructure for Compressed Natural Gas transportation, driven by demand from long-haul truck lines and by supply pressure from natural gas producing companies
  2. Develop capability of CNG motors, based on demand from trucking companies
  3. Provide CNG motors in autos, based on fuel-management technology developed for trucks. Just as diesel cars do, CNG cars can fuel at the truck pumps at the fuel station. Home fueling stations become viable for those homes that have gas heat.
  4. Deploy hydrogen-enhanced "Hythane" fuel. Hydrogen can be obtained by steam reformation of methane with carbon capture, or by direct production of hydrogen from water
  5. Develop "Hy-flex" engines that can run on any blend of hydrogen and methane from 100% CNG to 100% hydrogen. At this point pure hydrogen fuel stations become a technically viable proposition.
  6. Prohibit pure CNG
  7. Progressively reduce the allowed proportion of methane in Hythane fuel.
  8. Allowed proportion reaches 0%, prohibiting methane in compressed-gas fuel. Done!
We need to use the compressed-gas path rather than the liquefied natural gas path, because of the vast difference between the boiling points of hydrogen and methane. Liquid hydrogen in cars and trucks will probably never happen. Nor will exotic solid-state hydrogen storage systems, at least until we get to the pure-hydrogen mode; they do not offer the flex-fuel capability needed to bootstrap their technology into large-scale use.
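One reason the Hy-flex step matters: hydrogen carries far less energy per unit volume than methane, so the engine's fuel management has to adapt as the blend changes. A back-of-the-envelope sketch, using approximate round-number heating values (illustrative, not engineering data):

```python
# Approximate volumetric lower heating values at normal conditions
# (round numbers for illustration only).
LHV_H2_MJ_PER_M3 = 10.8   # hydrogen
LHV_CH4_MJ_PER_M3 = 35.8  # methane

def blend_energy(h2_fraction):
    """Volumetric energy content of a hydrogen/methane blend,
    by volume fraction of hydrogen (0.0 = pure CNG, 1.0 = pure H2)."""
    return h2_fraction * LHV_H2_MJ_PER_M3 + (1 - h2_fraction) * LHV_CH4_MJ_PER_M3

for pct in (0, 20, 50, 100):
    e = blend_energy(pct / 100)
    print(f"{pct:3d}% H2: {e:5.1f} MJ/m^3 ({e / LHV_CH4_MJ_PER_M3:.0%} of pure CNG)")
```

The same tank holds roughly a third as much energy on pure hydrogen as on pure CNG, which is exactly why the transition has to be gradual and why range expectations have to adjust along the way.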

Saturday, August 08, 2009

Too complex to exist 

I just ran across this excellent article from mid-June by Duncan Watts at Yahoo Research, summarizing the arguments for breaking up financial institutions whose failure would cause major disruptions in the national or international economy. The comments are unusually good, as well, including one from Bob Metcalfe, who professes not to be aware of how enamored social media entrepreneurs are of his eponymous law.

Interesting in its own right, the argument also applies to IT risk management. CIOs like to simplify their systems, for many good reasons, including security: the farther a system gets from being analyzable by the security staff, the more likely it is to contain a critical vulnerability that isn't being adequately addressed.

But they need to be sure that they don't simplify too much. We all know the maxim about not "putting all your eggs in one basket." CIO's like to say "we're an XX shop," where XX is IBM or Windows or SAP, but whenever they do this they're admitting not only that they're at the mercy of that vendor but also that they're at the mercy of any cybercriminal who holds an undisclosed zero-day exploit. If an application or infrastructure component is so essential that the business would have to shut down if it went down, then that application or component probably needs to be partitioned, modularized, and diversified so that no single failure is catastrophic.

Sunday, July 26, 2009

Health destruction systems 

At last, someone explains the perversity of "market-based" health care systems. And it's Paul Krugman, in his New York Times blog at http://krugman.blogs.nytimes.com/2009/07/25/why-markets-cant-cure-healthcare/. If you've ever tried to shop around for a better price on blood tests or X-rays, you'll recognize what he's talking about.

It's shocking, although I have to say not really surprising, that so much of the discussion in the debate about restructuring our healthcare system is about how to maintain the profits of the insurance companies at the expense of the health of U.S. citizens.

The other perversity of the current system is the fee-for-service model, that pays more for delivery of more procedures, regardless of whether they actually do any good for the patient.

The original vision for Health Maintenance Organizations was that they could reduce costs by keeping their subscribers healthy. Healthy people don't need treatment as often, so by providing programs that keep subscribers from getting sick, HMO's could reduce the amount of money they would spend on treatments. But they discovered that prevention programs have overhead -- they actually had to engage with their subscribers regularly, and convincing subscribers to stop doing unhealthy things and start doing healthy things was complicated and took work. It was much easier to simply deny care when subscribers got ill, or better yet to exclude people who were likely to get sick in the first place. If an HMO accepts only healthy subscribers, payments for treatments are low and subscriber fees are mostly profit. So HMO's became care-denial organizations. This acted to counterbalance the motivations for unnecessary treatments, but it didn't do anything to keep patients healthy.

In order for the United States to have a healthcare system that promotes the health of citizens instead of working against them, we have to identify those portions of the system that are incentivized to work against the interests of the end-users and either reverse those incentives or eliminate those portions entirely. I don't know of a structure that does that other than a government-administered single-payer system. Yes, government is inefficient, but it could hardly be more inefficient than the current system, which is full of middlemen, where every insurance company has its own unique set of forms for doctors to struggle with when they should be focusing on their patients, and where the "statement of benefits" from the insurance company has 3 different prices for every line item.

Tuesday, June 02, 2009

PCI "death penalty" 

The Payment Card Industry Security Standards Council has a framework of penalties for violations of their Data Security Standard. Some of these are explicit, while others are less obvious. Most obviously, if a merchant fails a DSS audit, fines can be imposed. But "merchant banks" have other ways to impose penalties. They can raise the per-transaction charges that occur every time they process a use of a card. They can raise the fraud management charges that come along with the privilege to accept credit cards. Merchants would much prefer to pay the hidden charges, since they don't involve the public shame of having a fine imposed. If customers hear that you've been fined for security problems, many of them will take their business elsewhere.

The most serious penalties are in the "death penalty" class: they cause the company to go out of business. The accounting firm Arthur Andersen might well have survived the scandal of its malfeasance with Enron had the State of Texas not withdrawn its license to practice accounting. Death penalties have been imposed for credit card security violations very rarely. One of the few recipients was CardSystems, which had its card-processing permission withdrawn by VISA and other issuers, and consequently went bankrupt.

Trying to recover money from any conceivable source, the "merchant bank" that Cardsystems worked with has now sued the company that audited its security and certified that it was compliant with the DSS. Wired's Threat Level blog has more details on this story. Being compliant with the DSS doesn't guarantee that you are secure; there are many loopholes in the standard that can be exploited by someone who's trying to pass its audit rather than secure their customers' data. It will be interesting to see how this plays out.



Tuesday, May 19, 2009

Threat taxonomies 

The Open Group has recently released a Risk Taxonomy. Taxonomies are important because they allow you to keep track of all the different variants of situations that may be encountered, making sure that any "generic" solution really covers all of the bases, and they allow you to base your analysis on lessons learned from similar situations, refining your response rather than having to reinvent it from scratch every time. Most importantly, they give the big picture, counteracting the tendency of technical people to dive into the details and never look up.

Other risk or threat taxonomies can be found in:
  1. U.S. NIST SP 800-30 "Risk Management Guide for Information Technology Systems"
  2. SANS has a "What works" poster series that was organized by threat a few years ago. Unfortunately that perspective is gone from the latest version.
The threat taxonomy that I use is organized by the class of goals the attacker has:

Saturday, April 18, 2009

Surprise-resistant 

The Financial Times has a story by Nassim Nicholas Taleb on Ten Principles for a Black Swan Proof World. These have a lot of implications for information security.
  1. What is fragile should break early while it is still small. Computerized systems always break; they need to be built so that no single component, including the hardware and the OS, can bring the system down, even if every instance of that component fails.
  2. No socialisation of losses and privatisation of gains. We haven't had a case where a computer system needed a government bailout. Let's hope we never do.
  3. People who were driving a school bus blindfolded (and crashed it) should never be given a new bus. PCI and HIPAA penalties for data breaches need to be much more severe than the slaps on the wrists that are given these days.
  4. Do not let someone making an “incentive” bonus manage a nuclear plant – or your financial risks. CISO's should never report to the CIO. CIO's are paid to reduce IT costs; if they can do so by ignoring risks, they will.
  5. Counter-balance complexity with simplicity. Information systems are the most complex systems in any enterprise. Every time some local solution is added in because it's too hard to make a global change, risk increases.
  6. Do not give children sticks of dynamite, even if they come with a warning. While IT users may be system admins of their PCs at home, they should not be given that privilege over the systems they use at work.
  7. Only Ponzi schemes should depend on confidence. Governments should never need to “restore confidence”. If your IT systems are so complex that different members of your security staff can't analyze them and arrive at the same risk results, you can't manage their risks consistently.
  8. Do not give an addict more drugs if he has withdrawal pains. Buying more security products does not often produce greater security.
  9. Citizens should not depend on financial assets or fallible “expert” advice for their retirement. If a security "consultant" uses some proprietary method that he can't teach to a company's security staff, he's likely to be making it up as he goes along.
  10. Make an omelette with the broken eggs. Don't remediate security weaknesses by patching on more controls; redesign the systems so that they are naturally secure.
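Principle 1 in engineering terms: components fail routinely, and the system should treat that as normal rather than catastrophic. A minimal sketch, with hypothetical backend functions standing in for redundant service instances:

```python
class AllBackendsFailed(Exception):
    """Raised only when every redundant instance has failed."""
    pass

def call_with_failover(backends, request):
    """Try each redundant backend in turn; a single component failure
    is routine, and only total failure surfaces as an error."""
    errors = []
    for backend in backends:
        try:
            return backend(request)
        except Exception as exc:  # in real code, catch specific exceptions
            errors.append(exc)
    raise AllBackendsFailed(errors)

# Hypothetical components: one broken instance, one healthy one.
def flaky(request):
    raise ConnectionError("instance down")

def healthy(request):
    return f"handled: {request}"

print(call_with_failover([flaky, healthy], "ping"))  # the failure is invisible to the caller
```

The point is the shape, not the ten lines of code: failure of any one instance is absorbed silently, and the design question becomes how many independent instances you need so that "every instance fails at once" is acceptably improbable.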

Saturday, March 28, 2009

Laws of evolution 

The Texas Board of Education has adopted language on the teaching of science that isn't as anti-science as many had feared, reports Garry Scharrer of the San Antonio Express-News in a story titled "Teaching evolution now protected". But scientists still are not communicating the principles of evolution in such a way that they are self-evident. Let's give it a try here...

Evolution by natural selection is a natural phenomenon with the same status as heat flow, which has its own Laws of Thermodynamics. It's a statistical statement about aggregate properties of groups of individuals, which in the case of thermodynamics are atoms, and in the case of evolution are biological organisms. Once it's understood clearly and carefully, what was originally an empirical generalization turns out to be a mathematical truth as incontrovertible as the fact that 2+3 is greater than either 2 or 3.

The laws of evolution apply to any entity that follows the first law, whether they are biological organisms, cultural memes, or data structures in an evolutionary algorithm in a computer.
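That last point is easy to demonstrate. Here is a minimal evolutionary algorithm, a sketch with a toy fitness function (count the 1-bits in a bit string): imperfect copying plus selection, and mean fitness climbs, with no biology anywhere in sight.

```python
import random

random.seed(1)

def fitness(genome):
    """Toy fitness: the count of 1-bits in the genome."""
    return sum(genome)

def evolve(pop_size=40, genome_len=20, generations=30):
    # Random initial population of bit strings.
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: the fitter half survives and reproduces.
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        # Reproduction with imperfect copying: flip one random bit per child.
        children = []
        for parent in parents:
            child = parent[:]
            i = random.randrange(genome_len)
            child[i] = 1 - child[i]
            children.append(child)
        pop = parents + children
    return pop

final = evolve()
avg = sum(fitness(g) for g in final) / len(final)
print(f"mean fitness after selection: {avg:.1f} / 20")
```

A random population starts at a mean fitness of about 10 out of 20; after a few dozen generations of copying errors plus selection, the mean is near the maximum. Nothing here was designed to find the all-ones string, yet the population finds it anyway, which is the mathematical point the post is making.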

Charles Darwin's great achievement was the discovery of the principles of evolution by examination of the fossil record and other sources. To the politically-minded, Darwinism is the recognition that the fossil record shows how evolution occurred in biological organisms.

In the 150 years since the publication of The Origin of Species, the theory of evolution has itself evolved, into a "modern synthesis" that is 60 years old now, incorporating molecular biology and population genetics. This theory (Huxleyism if you have to ideologize) recognizes that errors in DNA replication and recombination will lead to evolution, regardless of any evidence in the fossil record.

The latest features of evolutionary theory, still in progress under the banner of a weird name, evo-devo, are the incorporation of developmental lifecycles into the organization of the traits that natural selection acts upon.

Update: I probably ought to mention, since I cite the laws of thermodynamics, that life occurs in an open system, on the slopes of entropy gradients, not in the closed system that the second law of thermodynamics applies to.

Also, Christopher Hitchens has a commentary about the Texas Board of Education decision. Although always entertaining, Hitchens doesn't actually add much light to the debate.


This page is powered by Blogger. Isn't yours?