Tuesday, October 08, 2024

Improving the best grilled cheese sandwich

In 2022, I wrote an article on Medium about The Two Secrets to the Best Grilled Cheese Sandwich. The two secrets are simple: (1) use mayonnaise instead of butter, and (2) use real American cheese. I included a bonus secret: use plain white, square sandwich bread.

That's about it. It's near perfection.  But "near perfection" still leaves room for improvement, or at least variation without loss of quality.  Here are two more things you can do to add variety to the perfect grilled cheese.

Use two slices of American cheese, of course, and sprinkle some crumbled feta, blue cheese, or gorgonzola (which looks more green than blue to my eyes) between them before assembling the sandwich. This adds a bit of salt and some umami, although American cheese is already pretty salty.

Try Fontina cheese instead of American. It melts just about as well, but with a somewhat different flavor profile. You could also try Mexican Chihuahua cheese, though you may have to go to a Mexican supermercado to get it. It's a rare find in a mainstream supermarket, even in the Southwest US, where stores often have an entire section of Mexican cheeses.

When something is so near perfect, almost every change is a downgrade. These tweaks aren't downgrades, which is saying something.

Tuesday, June 18, 2024

Should Companies Be Owned by Their Workers?

Freakonomics Radio asked this question last month. They weren't sure, but they thought it was interesting to interview Pete Stavros, a senior executive at the private equity firm KKR. Stavros observed that employee-owned companies were 2% more profitable than conventionally owned ones, and he convinced KKR to give a form of employee equity to some of its own companies. Stavros seems sincere in his belief that he has found a win-win combination that benefits both the private equity firm and the employees, but if you think about it a bit more deeply, you discover that his pitch is an illusion.

When KKR institutes employee ownership, it gives employees shares in the company, which gives them a stake in the company's success. The idea that owners must inevitably exploit their workers in order to maximize profits is so deeply ingrained in the thinking of KKR's kind of capitalist that they were surprised that workers with a stake in their company were more productive. Soviet dictator Josef Stalin was also surprised when he allowed families to farm small plots of land for their individual use, in addition to the faceless collective that the New Soviet Man was supposed to slave for, and the private plots turned out to be more productive.

But unlike the Stalin-era farmer's plot, and unlike traditional Employee Stock Ownership Plans (ESOPs), the kind of ownership given to KKR workers is an illusion. It's not even as good as profit sharing (which is also good, despite not coming with any "ownership"), since it exposes the workers to capital losses as well as capital gains, and it gives the true owners, the KKR managers, another way to blame the workers instead of themselves when purchased companies ultimately fail.

"Ownership" is real only when it includes control.  This would mean giving the employees a seat on the board of directors, and giving employee shares voting power for board seats in the same way that shares in conventional public companies come with the ability and responsibility to vote for board members.  You can bet the KKR would never go for such an arrangement.

The best form of employee ownership, beyond even ESOPs, is to become a fully employee-owned co-op. But co-ops are even more complicated to set up than ESOPs, and their advantages may run up against deep aspects of human psychology. Investigating that will be another blog post.

Monday, May 27, 2024

Why the Xerox Star failed

"That trick never works!" - Rocket J. Squirrel

or

Single-purpose machines built by lone geniuses for other lone geniuses don't scale.

The Xerox Star was an attempt to commercialize the pioneering computers that emerged from Xerox's PARC labs in the 1970s. The line started with a machine called the Alto and continued with the Dorados, Dolphins, Dandelions, and Daybreaks, which came to be called D-machines, all running (mostly) the same software. They were a marvel for their day, and had all of the components of today's windowing PCs from Apple and Microsoft, but years earlier. The story of how the Star came to be and how Xerox came to abandon it is a complex one, and like the system itself, the full story is distributed across many obscure places that are not entirely consistent with each other.

But I think that the big-picture reasons for its failure get lost in the soap-opera details, and are worth a few words. Obviously, the Star was released too soon for its technology. Many of the reasons for its lack of broad adoption can be traced to the speed of its underlying electronics, and much of the architectural and usability weirdness of the various models of computer in its history can be traced to attempts to compensate for this basic problem. Three years later, Apple introduced the Macintosh, with almost the same capabilities, but with a commodity microprocessor CPU instead of custom designs made from bit-slice processor elements, at a tenth the price. The Star soon faded away, but the Mac continues on to this day.

What kicked the Mac over the top into the realm of success was its developer ecosystem. Technically, the Macintosh and Star developer models were very similar. In the early days of the Mac, it wasn't possible to develop Mac software on the Mac itself -- developers had to use the substantially more expensive and powerful Lisa, and then test and distribute their applications on a separate Macintosh unit. This was the same process that had been used for the Star, where developers worked in a Mesa development environment and then deployed the results to the Star environment.

The difference was that there were already a lot of Apple II developers who were excited to move up to the new and awesome Lisa system.   I was a grad student when the Lisa was released, and one of my friends got one -- he must have spent a substantial part of his meager life savings on it.  The business and research communities that were available to Xerox in its own time could not match that kind of commitment.

The community that developed the Star was small and exclusive. I happened to be attached to a PARC-adjacent computing community in the mid 1970s, and there were rumors of amazing things going on over there, but exactly what was never clear. It was impossible to tell from the outside, but much later it emerged that the PARC computing community was internally even more fragmented than it appeared to be. This turned out to be typical for an entire stream of computer research history, but the phenomenon was particularly acute at PARC, and its effects showed up in fundamental aspects of the D-machine architecture.

Notably, there wasn't really a "D-machine architecture". There were high-level visions of what might be possible in a world where every person had their own computer beside their desk, but everyone had a different version of that vision. PARC had been staffed with some of the most brilliant, creative, and independent computer researchers to be found, and they created Promethean hardware frameworks that could be elaborated into whatever their application vision required.

The result was that a given D-machine could become one of four different computers, depending on who was using it at any given time. A relatively tame instance provided for programming in the BCPL, Mesa, and Cedar languages under the control of a command-line interface in its windowed development environment, Tajo. Starting the machine with a different microcode load would turn it into an office workstation providing a suite of word processing and publishing tools with the now-familiar windows, icons, and folders metaphors, but without software development ability. Other microcode created a Lisp machine running Interlisp-D, or a dedicated Smalltalk machine. The microcode and even the device drivers for each environment were unique and incompatible, the user communities for each one didn't seem to talk to each other, and their writings rarely even acknowledged their counterparts' existence.

A D-machine gave its user a vast amount of control and flexibility -- it could become whatever you could imagine, if only you had the talent, skills, and time to build it before your tenure at the lab expired and you had to go somewhere else. Collecting brilliant people, giving them access to the most advanced tools, and letting them work on whatever they think they can be most productive at is a standard strategy for top-tier research labs, and it often produces marvelous results. PARC produced many marvelous things. However, creativity is unpredictable, and unpredictability is largely unworkable as an element of business strategy.

Xerox management turned out to be unable to herd its houseful of cats in a coherent direction that led to business success, and its inability to settle on a common machine language that would be tolerated by all the diverse projects is emblematic of that failure to achieve focus. Each external developer community was too small to provide the volunteer support Xerox needed to reach a critical mass that fed back into business innovation, the way DECUS and SHARE did for DEC and IBM, or the way Apple's community did in a less organized way; instead, those communities drained energy and R&D funds. Whether that critical mass was possible given the other limitations of the technology is impossible to say, but there's no evidence that Xerox or PARC management had any notion that customers could be creative contributors to a product, beyond passive consumption. It would take another two or three technology cycles before the idea of a "platform" became a common-sense aspect of business strategy.

Thursday, May 02, 2024

Two great lies in financial policy

It's not a lie if everyone believes it, is it?

I'm still not sure which of these should be first. They've both been distorting financial policies for decades. They have so much history that about all I can do here is name them. If you can break out of the reality distortion field that sustains them, the fact that they're lies becomes self-evident. It's tempting to call them "myths", because for many people they're articles of faith that must not be questioned. But experts know, or should know, that they're empirical claims that can be falsified. And they have been. Here's another try; an expanded version of this note is on Medium.

Lie No. 1: The proper rate of inflation is 2% annually.

Economists repeat this so often that it must be true. But historians at the New York Times and elsewhere have traced the history of the number back to a guy in New Zealand (an important guy in NZ at the time) who admitted that he just picked it because it seemed intuitively reasonable. There's no theory to support this number. Two percent seems reasonable if you don't think too hard, so everyone goes with it.

If you look just a little bit deeper than "what everybody who's important is saying", you'll find that the US Federal Reserve Act, as amended in 1977, requires price stability, i.e. an inflation rate of 0%, not 2%. Then look at prices in the US, which have been governed by the Federal Reserve system's policies, and you'll see that the dollar has lost about 80% of its value since then. That doesn't look like stability to me.
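To see how quickly even "low" inflation compounds, here's a back-of-the-envelope sketch. The 2% figure is the official target; the 3.5% figure is a rough stand-in for the actual average rate since 1977, not an official CPI statistic.

```python
# Purchasing power left after years of steady, compounding inflation.
# Rates are illustrative round numbers, not official CPI statistics.

def purchasing_power(rate: float, years: int) -> float:
    """Fraction of original purchasing power remaining after compounding inflation."""
    return 1.0 / (1.0 + rate) ** years

years = 2024 - 1977  # since the 1977 amendment to the Federal Reserve Act

for rate in (0.02, 0.035):  # the 2% target vs. a rough historical average
    left = purchasing_power(rate, years)
    print(f"{rate:.1%} inflation for {years} years leaves {left:.0%} "
          f"of the dollar's value ({1 - left:.0%} lost)")
```

Even the 2% target, held for that long, costs the dollar more than half of its value; the rough historical average gets you to the 80% loss mentioned above.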

Lie No. 2: Public companies are required to maximize their short term investor returns.

This lie has been debunked many times, but it tenaciously persists. People complain about it all the time, but the alternative never seems to sink in. I don't know why the competing soundbite is so slippery. Here's the truth: all companies, including public ones, are required to do whatever their owners specify. If the owners want to give the company's assets away to charity, or run the company into the ground for political or competitive reasons, that's their prerogative.

Milton Friedman is most famously associated with the "shareholder value" dictum. In the most generous reading, this lie is predicated on a misunderstanding of how shareholders communicate with management.  If they can only communicate by buying or selling their shares, then the amount of information that owners can communicate to management is incredibly limited.  A purchase means "good work", and a sale means "you're doing something wrong".  That's it.  Exactly what is being done well or badly is impossible to communicate.

Yet anyone who's actually become an owner by purchasing stock knows that they have purchased more influence than this, both formally and informally. Shareholders choose members of a corporation's board of directors, who choose senior managers to execute their desires.  They also specifically influence company policies by voting on shareholder initiative statements.

Alas, in many corporations, management has effective control of the board rather than vice versa. Board members are chosen by the CEO and then submissively ratified by the broader population of shareholders.

In addition, there is a vicious cycle of greed: sociopathic individuals whose goal is to accumulate more money, regardless of the cost to others and to the society that enables their greed, consequently accumulate more financial power with which to acquire even more money, even more rapidly.

Governments attempt to limit the destruction that this cycle causes with anti-monopoly laws, and with regulatory agencies and regulations intended to ensure that effective markets exist, where multitudes of interests can contend and mutually damp each other's excesses.

In heavily financialized modern economies, most of the shares of public companies are owned by huge funds that are themselves public companies, and this distancing of ownership makes communication of corporate goals other than making more profit very difficult. 

Getting large, bureaucratic organizations to change their behavior requires organized efforts, slogans, and acronyms. One set of important, not-specifically-profit-oriented goals has become known as ESG, for Environmental, Social, and Governance oriented investing. A goal complex called DEI, for Diversity, Equity, and Inclusion, is following ESG as an investment strategy that looks beyond mere short-term profit. Additional ways for companies to explicitly step away from the "profit is everything" philosophy are to incorporate as a "benefit corporation" or to become certified as a "B Corporation". Curiously, B Lab, the non-profit that administers B Corporation certification, is not itself certified as one.

So, with organizations tracking and publicizing these measures, the market should see how organizing for social benefit gives greater returns, and self-correct. Easy, right? Not if politicians get in the way. The contest between the profit motive and social goals, in both left-wing and right-wing politics, will be the stuff of history for many years to come.

Friday, February 23, 2024

What would world civilization look like if the US collapses?

Doomers' worst nightmare: a sustainable mid-tech, high culture global civilization, plagued by endless failing genocides.

Civilization would survive just fine. But it might not be a robust high-tech 21st century civilization. That might actually be a good thing - it's hard to tell. 

I've written an essay explaining how I came to this conclusion.  Medium says it should take about 8 minutes to read.  But if that's too long for you, here's an extended summary.

The United States in early 2024 is in a political situation where collapse into a quasi-civil war, like "the Troubles" in Northern Ireland, seems like a possibility. Elected politicians in Texas are calling for military-aided defiance of Federal authorities, supported by the governors of 25 other states. But unlike in the first US Civil War in the 1860s, there is no sign of the creation of large state armies to oppose the US Army, and the states themselves are internally divided to the point where a next war would be as much a "war within the states" as a "war between the states". Nobody in the Texas Legislature is proposing to fund the Texas Military Department to a level where it could pose more than symbolic opposition to Federal forces. It's more likely that violent opposition to the United States would take the form of "stochastic terrorism" (I prefer the term "freelance terrorism") - bombings and random mass shootings. Whether these could become focused enough to target Federal buildings and political gatherings seems doubtful.

But it's interesting to imagine what might happen if the US went into a collapse as deep as the Great Depression of the 1930s, one that somehow became permanent.

The global impact of US collapse would span five realms: general economic activity, social and cultural activity, geopolitics, technological development, and environmental stability.

The loss of the US as an economic force would severely, but not fatally, damage the global economy. The dollar would lose its role as the world's reserve currency, and this would have a tremendous impact. The World Bank, the Euro, and the Chinese Renminbi are waiting to take over if the situation becomes intolerable, though.

Global culture would not be significantly affected. High culture of symphonic music, fine art, and fashion has always been ruled by Europe, and would stay that way. 

Geopolitically, the long-predicted end of the Pax Americana would finally be realized, though the Great Game of pre-WWI colonialism is gone forever, never to return.  The Mideast would continue to be the same mess of intra-Islamic jihadism that it's been since the end of the Ottoman Empire.  China's dominance in the Far East would finally be unquestionable.

Attacks on Taiwan would lead to a major technological setback, since the most powerful semiconductors are made there by TSMC. Software to use the computational power of those semiconductor devices might lose the creative momentum that originates in Silicon Valley. But the tech giants are fully globalized and can easily migrate transactions and data from their already fortified datacenters to ones in less unstable areas.

Advanced electric power technology would easily be able to fill in the gap caused by the loss of the US.

When it comes to transportation, the US is no longer the uncontested leader in technology, but only a participant in a close race. The US is losing its lead in aerospace technology.  The US is not even in the running for the lead in advanced railroad technology. Automobile and truck technology has long been a global competition, and the loss of US auto manufacturing would wound employment in Mexico and Canada, but not significantly elsewhere.

The environment continues to be destroyed at a rate exceeding its restoration regardless of the details of civilizational conflicts, although there are macrotrends that act to slow the rate of destruction. 

As long as the High Income countries (aside from the chaos-plagued US) continue to produce pollution-reducing solutions, then as Low and Middle Income Countries graduate into the upper tier (and assuming that the World Bank and OECD don't move the dividing lines), their improving governance and economic incentives will lead them to reduce their emissions as well.

As we sum up the effects of US chaos in the five realms of global civilization beyond climate, it appears that short of a global thermonuclear war, the chief threats are related to reduction of silicon and lithium processing capability for computers, photovoltaic power sources and batteries.  These capabilities are concentrated in the Western Pacific, and it's essential that the rest of the world build up resiliency against disruptions there.

As long as environmental and climate deterioration can be reversed, the worst that might happen would be a reversion to the American lifestyle that was pervasive in the 1970s, before everyone had PCs and smartphones. With Total Electric Homes and electric cars in garages, this could be quite tolerable.

Tuesday, February 20, 2024

Seven simple fixes for US politics

Simple, though totally not at all easy. But in today's sound-bite environment, simple is a requirement. Half of these could be implemented by individual states, without the super-high threshold required for Constitutional amendments.

  1. Ranked choice, instant runoff voting. Reduces partisanship (parties hate this) and saves money. (A minimal sketch of how an IRV tally works follows this list.)
  2. Single, open primaries. Runoff first, with a 2-candidate election from the finalists. An alternative or supplement to preference voting that further enhances voter choice. Parties hate this even more.
  3. Rule-based redistricting.  "Non-partisan commission, appointed by politicians" is an oxymoron.
  4. Population-weighted Senate composition, with a two-Senator baseline, and all seats elected "at large" statewide.  One person, one vote, not one state, two votes, yet preserves a Congress with two distinct Houses with differing perspectives. Fixes the inequities of the Electoral College for free.
  5. Term limits for all Federal elected offices. If it's good enough for the President, it's good enough for Congress and the Supreme Court. Even for the Supreme Court, "Serving during good behavior" notwithstanding, if individual retirement is allowed, then mandatory retirement is obviously also allowed. Mandatory retirement at the age of Social Security would be a bonus.
  6. Rotating membership in the Supreme Court. Keep the nine justices, but every election cycle retire the senior justice and install a new one, selected at random from the justices of the Appellate Courts who have not yet served, or from the nine who have served least recently. If the Senate fails to confirm a nominee, a new nominee is selected from the Appellate justices as before.
  7. Mandatory National Guard service. "A well regulated militia, being necessary to the security of a free state," requires that every able-bodied person who possesses a gun be properly trained and organized. This in no way impairs the right to keep arms, and enhances citizens' ability to effectively bear arms. Organizing refresher tours of service by random selection, just like jury duty, should not be excessively burdensome. Every new purchase of a weapon comes with free state-provided training. Free weapons are already provided to volunteer Militiamen; they should be allowed to keep de-automated ones when their tour ends.
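Since instant runoff is an algorithm as much as a policy, here is a minimal sketch of how an IRV tally works. The ballots and candidate names are hypothetical, and real election rules add tie-breaking and exhausted-ballot provisions that this sketch omits.

```python
from collections import Counter

def instant_runoff(ballots: list[list[str]]) -> str:
    """Repeatedly eliminate the last-place candidate until someone has a majority."""
    candidates = {c for ballot in ballots for c in ballot}
    while True:
        # Count each ballot for its highest-ranked surviving candidate.
        tallies = Counter(
            next(c for c in ballot if c in candidates)
            for ballot in ballots
            if any(c in candidates for c in ballot)
        )
        leader, votes = tallies.most_common(1)[0]
        if votes * 2 > sum(tallies.values()):
            return leader
        # No majority yet: drop the candidate with the fewest current votes
        # (ties here are broken arbitrarily; real rules are more careful).
        candidates.remove(min(tallies, key=tallies.get))

# Hypothetical ballots, each listed from first choice to last.
ballots = [["A", "B"], ["A", "C"], ["B", "C"], ["B", "A"], ["C", "B"]]
print(instant_runoff(ballots))  # prints "B": C is eliminated and that ballot transfers to B
```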
The current American political system is not massively broken, but some fundamental defects that weren't intolerable in past eras have been exploited into severe problems. A bit of tuning is in order, and should make it substantially more robust.

Monday, January 01, 2024

Almost as good as free will

Stanford professor Robert Sapolsky has concluded that free will doesn't exist. I mostly agree.

Neurobiologists like Sapolsky, psychologists, and even computer scientists have realized that the brain has multiple components that independently make decisions in different domains, a point which seems to have eluded philosophers for generations.  Sapolsky's point about our inability to "choose what to choose" takes that dissociation far beyond most philosophers' thinking.

Notably missing from discussions about Sapolsky's ideas is the physicist's perspective. The brain is a material object subject to the laws of quantum mechanics, which most physicists have realized are fully deterministic, following the Schrödinger and Dirac equations with incomprehensible complexity. In order to preserve free will in quantum theory, some creative physicists have concluded that "electrons have free will".

Yet even without absolute free will, our independence from the environment and other people that allows us to think and act on our own as individuals provides for an autonomous will, which should be good enough for practical and legal purposes.

Unrelated: Happy 2024!


Tuesday, December 05, 2023

The Byzantine Generals Problem also applies to politics with lies and misinformation

The classic work on the Byzantine Generals problem arose in the context of fault-tolerant computing. The Wikipedia entry on the topic is titled Byzantine Fault. Thinking about the problem for reasons that I can't recall, I recently realized that it can also apply to political systems infested with lies and misinformation. Studies of this aspect are hard to find, if they exist at all.

Leslie Lamport's 1982 paper is concerned strictly with systems that use only point-to-point communications, rather than political situations where miscommunications are broadcast to audiences of various sizes. Its successors are (almost?) exclusively about reducing the number of messages that need to be sent to reach a correct consensus despite faults. The remainder are concerned with the consensus mechanisms for cybercurrencies, and rarely go into any mathematical depth about the consensus-formation problem itself. I expected to find discussions of this in the economics or political science literature, but my web search skills, such as they are, didn't uncover any. Maybe their vocabulary is totally disjoint from the computer science vocabulary?

What political scientists should want to know are things like how the probability of a false consensus varies with the probability that any particular general lies, and with the number of variably lying generals. If everyone lies, but nobody lies very often, how much worse or better is that than a situation where some generals lie all the time?
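For a flavor of what such a study might look like, here's a toy Monte Carlo sketch. It is not Lamport's protocol, just a deliberately simplified model with made-up parameters: an honest commander broadcasts an order, each lieutenant relays the order to the others but lies with some probability, and each lieutenant decides by majority vote over everything it heard. The quantity of interest is how often a majority of lieutenants ends up agreeing on the wrong order.

```python
import random

def false_consensus_rate(n: int, p_lie: float, trials: int = 10_000) -> float:
    """Toy model: fraction of trials in which a majority of n lieutenants
    decide on the wrong order, when every relayed message is a lie with
    probability p_lie. The true order is 1, and the commander is honest."""
    failures = 0
    for _ in range(trials):
        # heard[j] collects what lieutenant j hears relayed from the others.
        heard = [[] for _ in range(n)]
        for i in range(n):
            for j in range(n):
                if i != j:
                    heard[j].append(0 if random.random() < p_lie else 1)
        # Each lieutenant votes over its own copy of the order (a 1) plus the relays.
        decisions = [1 if sum(msgs) + 1 > (len(msgs) + 1) / 2 else 0 for msgs in heard]
        if sum(decisions) <= n / 2:  # a majority settled on the wrong order
            failures += 1
    return failures / trials

for p in (0.1, 0.3, 0.5):
    print(f"p_lie={p:.1f}: false consensus in {false_consensus_rate(7, p):.1%} of trials")
```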

The least bad news is that autocracies can be consistently subverted if at least 1/3 of the "lieutenants" fail to follow the generalissimo. The Achilles' heel of all the variations seems to be the vote-counting systems. Open voting, like legislative roll-call votes, appears to be the most robust to miscounts. Open counting of secret ballots can also work. That's why vote-counting machines must be fully open source.


Wednesday, November 22, 2023

Prometheus Unbound - the future of AGI after the OpenAI board upset

For tech spectators and AI participants, it's been an exciting weekend. The dust has not fully settled, but it appears that the most influential AI company will end up keeping its original CEO, but with a new board of directors. All pretense of being seriously not-really-for-profit and concerned with "AI safety" (whatever that is) is now gone. It may be too soon to understand the detailed ramifications of these changes; some reports have the latest version of OpenAI's board planning to triple its size and rework yet again the organizational structure, giving Sam Altman de jure control in addition to his demonstrated de facto control, with Microsoft playing a more official role this go-round. Time will tell.

People come and go, but the industry landscape is really determined by AI's insatiable demand for compute cycles. To understand AI power relations, "follow the money" turns into "follow the chips". Who's got the chips now? Nvidia and Microsoft have introduced a new generation of them. Nvidia's H200 displaces the H100, which now becomes last year's ancient history. Microsoft introduced its Maia 100 chip a week ago. Google has had its Tensor Processing Unit (TPU) chips for years, as has Amazon AWS with its Inferentia and Trainium chips. Nvidia powers the vast majority of the other AI engines, including those from OpenAI.

If you look at it from the chips-and-datacenters perspective, the future becomes easy to see. Instead of a single AGI ruling the universe, we will have a handful of titanic AGIs ostensibly ruling from Silicon Valley instead of the Greek Mount Othrys, although their datacenters are really dispersed worldwide. Unlike the ancient gods, these artificial gods will be under the at least nominal control of their respective corporate masters.

The outcome of the "AI alignment" debate is also now clear. AGI development will be aligned not with "humanity" but with capitalism.  Many people will become wealthy as a result, and a few people will become unimaginably wealthy.  The future of humanity under capitalism has been uncertain for 150 years now, the advent of AGI doesn't really change this.

Thursday, October 19, 2023

Post-modern origin of species

In the 1860s we had Charles Darwin's ideas. Then in the 1940s we had a "modern synthesis" of genetics and population biology. And in the 1970s and 1980s this was tied to molecular genetics. Now we have attempts to describe speciation in even more fundamental terms. Forty years later -- it's about time.

Recently, two papers have appeared in Nature and PNAS that attempt to show how to identify when natural selection is occurring, in an abstract sense that could help us understand how living systems arise from non-biological systems - abiogenesis. They're not quite successful. In fact, they're so abstract that people are having trouble figuring out what these theories are even trying to do.

An article in Ars Technica is an example of this confusion.  From the perspective of "publish or perish", this is a good thing, since it means that there are plenty of opportunities for easy papers explaining and correcting.

From my perspective, the problem is that they're phenomenological: they show how to recognize species, but don't explain why distinct species should exist at all, or how they come about. They both assume that species come about via natural selection, although there are other mechanisms that can preferentially increase the population of some kinds of objects within a broader spectrum of varieties. When objects are created and destroyed via some inaccurate replication process, some varieties will take less energy to create, and some will last longer once they're created. These are the "thermodynamically preferred" varieties, and the laws of non-equilibrium thermodynamics (which are mathematical laws, not physical ones) determine how fast the populations of these varieties grow and decline.

Then, in chemical systems, you'll find that some varieties have catalytic properties that amplify the rates of creation of other varieties, and, rarely, autocatalytic properties that amplify the rates of creation of themselves. The autocatalytic property may be distributed across a loop or network of reactions in a hypercycle. Neither paper provides a way to recognize or measure the existence or power of the autocatalytic advantage, although the PNAS paper would ascribe a "function" to it once it's recognized.
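To make the distinction concrete, here's a minimal numerical sketch, with made-up rate constants rather than measured chemistry. A merely thermodynamically favored variety is produced at a constant rate and saturates; an autocatalytic variety is produced in proportion to its own population, so even a tiny seed eventually overtakes it. That runaway amplification is the ingredient that makes selection-like dynamics possible.

```python
# Toy kinetics with made-up rate constants (illustration only, not measured chemistry).
# x: a thermodynamically favored variety, produced at a constant rate and decaying.
# y: an autocatalytic variety, produced in proportion to its own population.
dt, steps = 0.01, 4000
k_prod, k_auto, k_decay = 1.0, 0.5, 0.1
x, y = 0.0, 0.01  # the autocatalyst starts from a tiny seed

for step in range(steps):
    if step % 500 == 0:
        print(f"t={step * dt:5.1f}  favored={x:9.2f}  autocatalytic={y:12.2f}")
    x += dt * (k_prod - k_decay * x)      # constant production, first-order decay
    y += dt * (k_auto * y - k_decay * y)  # self-amplifying production, same decay
```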

That paper tries to focus on "function", but the focus doesn't really achieve the needed sharpness, because the word is ambiguous. Human brains are hardwired to see goal-oriented phenomena in as many places as possible. But for most of the history of life, goals didn't exist, and things happened because they followed inevitably from the way things were in the past, rather than happening in order to change the future by approaching an internally represented target state. In attempting to create an objective definition of function, the paper almost escapes this teleological trap, but you can tell from the uses of the term elsewhere that the authors' hearts haven't really accepted the concept. Many of the comments on the Ars Technica story attribute this to a conflict of interest with the quasi-religious goals of the foundation that funded much of the authors' work.

That's too bad. The slogan "It goes by itself" needs to become as much a part of everyone's way of thinking as Galileo's "Nevertheless, it moves" did 400 years ago.

