Friday, February 23, 2024

What would world civilization look like if the US collapses?

Doomers' worst nightmare: a sustainable mid-tech, high culture global civilization, plagued by endless failing genocides.

Civilization would survive just fine. But it might not be a robust high-tech 21st century civilization. That might actually be a good thing - it's hard to tell. 

I've written an essay explaining how I came to this conclusion.  Medium says it should take about 8 minutes to read.  But if that's too long for you, here's an extended summary.

The United States in early 2024 is in a political situation where collapse into a quasi-civil war like "the Troubles" in Northern Ireland seems like a possibility.  Elected politicians in Texas are calling for military-aided defiance of Federal authorities, supported by the governors of 25 other states.  But unlike the first US Civil War in the 1860s, there is no sign of large state armies being raised to oppose the US Army, and the states themselves are internally divided to the point where a next war would be as much a "war within the states" as a "war between the states".   Nobody in the Texas Legislature is proposing to fund the Texas Military Department to a level where it could offer more than symbolic opposition to Federal forces.  It's more likely that violent opposition to the United States would take the form of "stochastic terrorism" (I prefer the term "freelance terrorism"): bombings and random mass shootings. Whether these could become focused enough to target Federal buildings and political gatherings seems doubtful.

But it's interesting to imagine what might happen if the US went into a collapse as deep as the Great Depression of the 1930s, one that somehow became permanent.

The global impact of US collapse would span five realms: general economic activity, social and cultural activity, geopolitics, technological development, and environmental stability.

The loss of the US as an economic force would severely, but not fatally, damage the global economy. The Dollar would lose its role as the world's reserve currency, and this would have a tremendous impact. The World Bank, the Euro, and the Chinese Renminbi are waiting in the wings, though, if the situation becomes intolerable.

Global culture would not be significantly affected. High culture of symphonic music, fine art, and fashion has always been ruled by Europe, and would stay that way. 

Geopolitically, the long-predicted end of the Pax Americana would finally be realized, though the Great Game of pre-WWI colonialism is gone forever, never to return.  The Mideast would continue to be the same mess of intra-Islamic jihadism that it's been since the end of the Ottoman Empire.  China's dominance in the Far East would finally be unquestionable.

Attacks on Taiwan would lead to a major technological setback, since the most powerful semiconductors are made there by TSMC. Software to exploit the computational power of those semiconductor devices might lose the creative momentum that originates in Silicon Valley. The tech giants, however, are fully globalized and can easily migrate transactions and data from their already fortified datacenters to ones in less unstable areas.

Advanced electric power technology would easily be able to fill in the gap caused by the loss of the US.

When it comes to transportation, the US is no longer the uncontested leader in technology, but only a participant in a close race. The US is losing its lead in aerospace technology.  The US is not even in the running for the lead in advanced railroad technology. Automobile and truck technology has long been a global competition, and the loss of US auto manufacturing would wound employment in Mexico and Canada, but not significantly elsewhere.

The environment continues to be destroyed at a rate exceeding its restoration regardless of the details of civilizational conflicts, although there are macrotrends that act to slow the rate of destruction. 

As long as the High Income countries (aside from the chaos-plagued US) continue to produce pollution-reducing solutions, then as Low and Middle Income Countries graduate into the upper tier (and assuming that the World Bank and OECD don't move the dividing lines), their improving governance and economic incentives will lead them to reduce their emissions as well.

As we sum up the effects of US chaos in the five realms of global civilization beyond climate, it appears that short of a global thermonuclear war, the chief threats are related to reduction of silicon and lithium processing capability for computers, photovoltaic power sources and batteries.  These capabilities are concentrated in the Western Pacific, and it's essential that the rest of the world build up resiliency against disruptions there.

As long as environmental and climate deterioration can be reversed, the worst that might happen would be a reversion to the American lifestyle that was pervasive in the 1970s, before everyone had PCs and smartphones. With Total Electric Homes and electric cars in garages, this could be quite tolerable.

Tuesday, February 20, 2024

Seven simple fixes for US politics

Simple, though not at all easy.  But in today's sound-bite environment, simple is a requirement. Half of these could be implemented by individual states without the super-high threshold required for Constitutional amendments.

  1. Ranked choice, instant runoff voting. Reduces partisanship (parties hate this) and saves money.
  2. Single, open primaries. Runoff first, with a 2-candidate election from the finalists. An alternative or supplement to preference voting that further enhances voter choice. Parties hate this even more.
  3. Rule-based redistricting.  "Non-partisan commission, appointed by politicians" is an oxymoron.
  4. Population-weighted Senate composition, with a two-Senator baseline, and all seats elected "at large" statewide.  One person, one vote, not one state, two votes, yet preserves a Congress with two distinct Houses with differing perspectives. Fixes the inequities of the Electoral College for free.
  5. Term limits for all Federal elected offices. If it's good enough for the President, it's good enough for Congress and the Supreme Court. Even for the Supreme Court, "Serving during good behavior" notwithstanding, if individual retirement is allowed, then mandatory retirement is obviously also allowed. Mandatory retirement at the age of Social Security would be a bonus.
  6. Rotating membership in the Supreme Court. Keep the nine justices, but every election cycle retire the senior justice and install a new one from among the judges of the Appellate Courts, selected at random from those who have not yet served, or from the nine who have served least recently. If the Senate fails to confirm a nominee, a new nominee is selected from the Appellate judges as before.
  7. Mandatory National Guard service. "A well regulated militia, being necessary to the security of a free state," requires that every able-bodied person who possesses a gun be properly trained and organized. This in no way impairs the right to keep arms, and enhances citizens' ability to effectively bear arms. Organizing refresher tours of service by random selection, just like jury duty, should not be excessively burdensome. Every new purchase of a weapon comes with free state-provided training. Free weapons are already provided to volunteer Militiamen; they should be allowed to keep de-automated ones when their tour ends.
The current American political system is not massively broken, but some fundamental defects that weren't intolerable in past eras have been exploited into severe problems. A bit of tuning is in order, and should make it substantially more robust.
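The instant-runoff tally in item 1 is mechanical enough to sketch in a few lines of code. This is a minimal illustration of the counting rule (the candidate names and ballots are invented), not a production election system:

```python
from collections import Counter

def instant_runoff(ballots):
    """Each ballot is a list of candidates in preference order.
    Repeatedly eliminate the last-place candidate until someone
    holds a majority of the ballots still counting."""
    candidates = {c for b in ballots for c in b}
    while True:
        # Count each ballot toward its highest-ranked surviving candidate.
        tally = Counter(next(c for c in b if c in candidates)
                        for b in ballots if any(c in candidates for c in b))
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > sum(tally.values()):
            return leader
        # Eliminate the candidate with the fewest first-choice votes.
        candidates.remove(min(tally, key=tally.get))

ballots = [["A", "B"], ["A", "C"], ["B", "C"], ["C", "B"], ["C", "B"]]
print(instant_runoff(ballots))  # "C": after "B" is eliminated, C has 3 of 5
```

Note how "A" leads on first-choice votes but loses once "B" voters' second choices are counted; that transfer of support is exactly what reduces the spoiler effect that entrenches two-party polarization.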

Monday, January 01, 2024

Almost as good as free will

Stanford professor Robert Sapolsky has concluded that free will doesn't exist. I mostly agree.

Neurobiologists like Sapolsky, psychologists, and even computer scientists have realized that the brain has multiple components that independently make decisions in different domains, a point which seems to have eluded philosophers for generations.  Sapolsky's point about our inability to "choose what to choose" takes that dissociation far beyond most philosophers' thinking.

Notably missing from discussions about Sapolsky's ideas is the physicist's perspective.  The brain is a material object subject to the laws of quantum mechanics, whose time evolution many physicists regard as fully deterministic, following the Schrödinger and Dirac equations with incomprehensible complexity. In order to preserve free will in quantum theory, some creative physicists have concluded that "electrons have free will".

Yet even without absolute free will, our independence from the environment and other people that allows us to think and act on our own as individuals provides for an autonomous will, which should be good enough for practical and legal purposes.

Unrelated: Happy 2024!


Tuesday, December 05, 2023

The Byzantine Generals Problem also applies to politics with lies and misinformation

The classic work on the Byzantine Generals problem arose in the context of fault-tolerant computing.  The Wikipedia entry on the topic is titled Byzantine Fault.   Thinking about the problem for reasons that I can't recall, I recently realized that it can apply to political systems infested with lies and misinformation. Studies of this aspect are hard to find, if they exist at all.

Leslie Lamport's 1982 paper is concerned strictly with systems that use only point-to-point communications, rather than political situations where miscommunications are broadcast to audiences of various sizes. Its successors are (almost?) exclusively about reducing the number of messages needed to prevent any faulty conclusions from being reached.  The remainder are concerned with the consensus mechanisms for cybercurrencies, and rarely go into any mathematical depth about the consensus formation problem itself.  I expected to find discussions of this in the economics or political science literature, but my web search skills, such as they are, didn't uncover any.  Maybe their vocabulary is totally disjoint from the computer science vocabulary?

What political scientists should want to know are things like how the probability of a false consensus varies with the probability that any particular general lies, and with the number of variably lying generals.  If everyone lies, but nobody lies very often, how much worse or better is that than a situation where some generals lie all the time?

The least bad news is that autocracies can be consistently subverted if at least 1/3 of the "lieutenants" fail to follow the generalissimo. The Achilles heel of all the variations seems to be vote-counting systems. Open voting, like legislative roll call votes, appears to be most robust to miscounts. Open counting of secret ballots can also work. It's why vote-counting machines must be fully open source.
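The comparison posed above can at least be explored numerically. Here's a Monte Carlo sketch of a deliberately simplified model (a one-shot majority vote with independent liars, my own simplification rather than Lamport's interactive protocol) contrasting "everyone lies rarely" with "a few generals lie always":

```python
import random

def false_consensus_rate(n, lie_prob, trials=20000, seed=1):
    """Fraction of trials in which a simple majority vote among n
    generals reports the wrong value.  The true value is 1; general i
    independently lies (reports 0) with probability lie_prob[i]."""
    rng = random.Random(seed)
    wrong = 0
    for _ in range(trials):
        honest_votes = sum(0 if rng.random() < p else 1 for p in lie_prob)
        if honest_votes * 2 <= n:   # liars reach a majority (or tie)
            wrong += 1
    return wrong / trials

n = 9
everyone_rarely = [1 / 3] * n          # all nine generals lie a third of the time
three_always = [1.0] * 3 + [0.0] * 6   # three generals always lie, six never do
print(false_consensus_rate(n, everyone_rarely))   # roughly 0.14
print(false_consensus_rate(n, three_always))      # 0.0: six honest votes always win
```

Even this toy model shows the asymmetry: a fixed minority of constant liars never flips the vote, while universal occasional lying produces false consensus a measurable fraction of the time.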


Wednesday, November 22, 2023

Prometheus Unbound - the future of AGI after the OpenAI board upset

For tech spectators and AI participants it's been an exciting weekend.  The dust has not fully settled, but it appears that the most influential AI company will end up keeping its original CEO, but with a new board of directors.  All pretense of being seriously not-really-for-profit and concerned with "AI safety" (whatever that is) is now gone. It may be too soon to understand the detailed ramifications of these changes; some reports say the goal of the latest version of OpenAI's board is to triple its size and rework the organizational structure yet again to give Sam Altman de jure control in addition to his demonstrated de facto control, with Microsoft playing a more official role this go-round.  Time will tell.

People come and go, but the industry landscape is really determined by AI's insatiable demand for compute cycles.  To understand AI power relations, "follow the money" turns into "follow the chips".  Who's got the chips now?  Nvidia and Microsoft have introduced a new generation of them. Nvidia's H200 displaces the H100, which now becomes last year's ancient history.  Microsoft introduced its Maia 100 chip a week ago. Google has had its Tensor Processing Unit (TPU) chips for years, as has Amazon AWS with its Inferentia and Trainium chips.  Nvidia powers the vast majority of other AI engines, including those from OpenAI.

If you look at it from the chips and datacenters perspective, the future becomes easy to see.  Instead of a single AGI ruling the universe, we will have a handful of titanic AGI's ostensibly ruling from Silicon Valley instead of the Greek Mount Othrys, although their datacenters are really dispersed worldwide. Unlike the ancient gods, these artificial gods will be under the at least nominal control of their respective corporate masters.

The outcome of the "AI alignment" debate is also now clear. AGI development will be aligned not with "humanity" but with capitalism.  Many people will become wealthy as a result, and a few people will become unimaginably wealthy.  The future of humanity under capitalism has been uncertain for 150 years now; the advent of AGI doesn't really change this.

Thursday, October 19, 2023

Post-modern origin of species

In the 1860s we had Charles Darwin's ideas.  Then in the 1940s we had a "modern synthesis" of genetics and population biology.  And in the 1970s and 1980s this was tied to molecular genetics. Now we have attempts to describe speciation in even more fundamental terms.  Forty years later -- it's about time.

Recently two papers have appeared in Nature and PNAS that attempt to show how to identify when natural selection is occurring in an abstract sense that could help to understand how living systems arise from non-biological systems - abiogenesis. They're not quite successful.  In fact, they're so abstract that people are having trouble figuring out even what these theories are trying to do.

An article in Ars Technica is an example of this confusion.  From the perspective of "publish or perish", this is a good thing, since it means that there are plenty of opportunities for easy papers explaining and correcting.

From my perspective, the problem is that they're phenomenological, showing how to recognize species, but not explaining why distinct species should even exist, or how they come about.  They both assume that species come about via natural selection, although there are other mechanisms that can preferentially increase the population of some kinds of objects within a broader spectrum of varieties.  When objects are created and destroyed via some inaccurate replication process, some varieties will take less energy to create, and some will last longer once they're created.  These are called "thermodynamically preferred" varieties, and the laws of non-equilibrium thermodynamics (which are mathematical laws, not physical ones) will determine how fast the populations of these varieties grow and decline.

Then in chemical systems, you'll find that some varieties have catalytic properties that amplify the rates of creation of other varieties, and, rarely, autocatalytic properties that amplify the rates of creation of themselves. The autocatalytic property may be distributed across a loop or network of reactions in a hypercycle.  Neither paper provides a way to recognize or measure the existence or power of the autocatalytic advantage, although the PNAS paper would ascribe a "function" to it, once it's recognized.

That paper tries to focus on "function", but the focus doesn't really achieve the needed sharpness, because the word is ambiguous.  Human brains are hardwired to see goal-oriented phenomena in as many places as possible. But for most of the history of life, goals didn't exist: things happened because they followed inevitably from the way things were in the past, not in order to change the future by approaching an internally represented target state.  In attempting to create an objective definition of function, the paper almost escapes this teleological trap, but you can tell from the uses of the term elsewhere that the authors' hearts haven't really accepted the concept.  Many of the comments on the Ars Technica story attribute this to a conflict of interest with the quasi-religious goals of the foundation that funded much of the authors' work.

That's too bad.  The slogan "It goes by itself" needs to become as much a part of everyone's way of thinking as Galileo's "Nevertheless, it moves" did 400 years ago.


Saturday, August 19, 2023

Improving assessment of authentication via some formalization: Preliminary considerations

Authentication used to be easy: collect a username and password, and check the password.  Now it's so complicated that it takes hundreds of pages to specify how it works, and you have to be a talented professional to know if something built to the specification is trustworthy.  

And the requirements for authentication have grown equally large and complex -- a single identity spans multiple implementations, with delegated identities, so authentication is often performed by a different organization than initial registration of an identity, and probably with different policies that need to be coordinated. 

It's no longer possible for a single person to have the privileges and resources to learn and comprehend all the implementations used by a single identity. This means that even if you're a specialist in authentication systems, you can't be sure that the authentication framework that's used by the people that you're responsible for actually fulfills its requirements.  If you're an ordinary user, you can only trust that the social-economic effect of millions of other users like you has enough of a cumulative effect towards trustworthiness that the system is reliably usable.

Perfect trustworthiness is impossible. It's not even possible to clearly and consistently judge how close to perfection we actually come with real-life systems.  But we can make it easier to understand how it all works and to analyze where the weak points are. Formal methods are the standard recommendation for assuring consistency in designs: they replace ambiguous verbal descriptions with strictly defined notations.  But if the problem is complexity, the formal descriptions must be just as complex as the verbal ones, and they make even greater demands on the mental capabilities of the security specialists trying to use them, since they define yet another language that must be learned and understood, in addition to the natural English of the informal descriptions.

We need tools that will ease the burden of validation of authentication systems by automating the consistency checks themselves. And we need those tools to be usable without imposing their own intolerable complexity demands on their users.

We could start a search for such tools by looking at automated proof assistants, like Coq and Lean.  These turn out to be written for mathematicians, not practicing developers who need to prove the correctness of real-world software, much less application specialists like security analysts.  Maybe we could use languages based on principles learned from proof assistants, such as dependent types.  But no, these are still mostly research projects: even the most promising of them, Agda and Idris, remain academic tools without mainstream industrial adoption, and the dependent type language developed in The Little Typer is a toy language not intended to be used seriously.

Making a long story short, we could look at popular functional languages like Haskell and OCaml, and reject them as being contaminated by too much syntax to learn for the value they provide in utility as modeling tools. (Figuring out what functional languages are good for, if anything, is a continuing adventure.)

In the end, we want a small set of properties in our modeling language:

  • Static typing, because we want to check the model, not execute it.
  • Classic Euler function syntax, i.e. f(x), rather than some Polish notation with too many parentheses (Lisp, typed Racket) or with no parentheses at all (Haskell, OCaml).
  • Functional capability, in order to capitalize on the amazing proof properties of the Curry-Howard-Lambek correspondence if we can, as well as all the other integrity-enhancing properties of the functional programming paradigm.
  • Minimization of the amount of transformation needed to process JSON descriptions, since we want to describe the essential properties of authentication systems as a finite-state machine, in a simple, well-known data description language like JSON.
A modeling language with these properties won't provide the ability to check everything we want to confirm about authentication systems (like resistance to side-channel or hardware attacks, or even the standard correctness properties), but it does allow us to address several of the biggest concerns: 
  • Completeness: that the descriptions don't have undocumented gaps where loopholes and backdoors can lurk, and that the descriptions themselves aren't so complicated to understand that we inadvertently skip over key parts, and miss important errors that they might contain.
  • Consistency: that all the components of the description fit together as claimed.
  • Clarity: that the descriptions don't rest on ambiguities inherent in natural languages in order to achieve a false sense of consistency.
  • Absence of hidden weakening: "A chain is only as strong as its weakest link." Complex systems contain many points where it is possible for weak cryptography to slip in without notice, often in the form of short or weak keys, or as obsolete, broken algorithms.
  • Key traceability, in two forms:
    • Password identifiability: all users who can create, view or change a password are known. All too frequently, there are privileged administrators who can compromise security without any evidence of their misbehavior being recorded.  This is of course a key concern for privacy maintenance of information that isn't security keys, as well.
    • Auto-generated randomness: many security algorithms are dependent on the system generating a random number that is often immediately used and discarded, but other times may be preserved for a long time, e.g. across system restarts.  It's important to know where these numbers originate from, and that they are cryptographically secure, i.e. unpredictable in the short run as well as unpredictable in the long run.
  • Secure events are securely logged: logging of key events should be onto write-once media, or distributed onto a public blockchain that is immutably and irrevocably copied.

This gets us to subsets of either TypeScript or Gleam as our quasi-formal modeling language.  We'll write about these in a future post.
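To make the JSON-as-finite-state-machine idea concrete, here's a sketch of the kind of completeness and consistency check this post has in mind. It's in Python rather than the TypeScript or Gleam the post proposes, and the schema (states, events, transitions, terminal) is invented for illustration:

```python
def check_fsm(model):
    """Check a JSON-style authentication state machine for the
    Completeness and Consistency properties listed above: every
    transition references declared states and events, no two
    transitions contradict each other, and every non-terminal
    state handles every event (no undocumented gaps)."""
    states, events = set(model["states"]), set(model["events"])
    errors = []
    seen = {}   # (state, event) -> target, to catch contradictions
    for t in model["transitions"]:
        src, ev, dst = t["from"], t["on"], t["to"]
        if src not in states or dst not in states:
            errors.append(f"unknown state in {t}")
        if ev not in events:
            errors.append(f"unknown event in {t}")
        if (src, ev) in seen and seen[(src, ev)] != dst:
            errors.append(f"conflicting transitions from {src} on {ev}")
        seen[(src, ev)] = dst
    for s in states - set(model.get("terminal", [])):
        for ev in events:
            if (s, ev) not in seen:
                errors.append(f"gap: state {s!r} does not handle {ev!r}")
    return errors

login = {
    "states": ["anonymous", "challenged", "authenticated"],
    "events": ["submit_credentials", "pass", "fail"],
    "terminal": ["authenticated"],
    "transitions": [
        {"from": "anonymous", "on": "submit_credentials", "to": "challenged"},
        {"from": "challenged", "on": "pass", "to": "authenticated"},
        {"from": "challenged", "on": "fail", "to": "anonymous"},
    ],
}
for e in check_fsm(login):
    print(e)   # reports the events each non-terminal state leaves unhandled
```

Even this toy checker surfaces the kind of undocumented gap that loopholes hide in: the model never says what happens when a challenged user resubmits credentials.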

Sunday, April 16, 2023

Foraging as a unifying strategy for neurobehavioral research programs

I follow a few computational and behavioral neuroscientists, and have noticed that they sometimes mention foraging in their research summaries.  I've recently been realizing the brilliance of foraging as a coordinating framework for a research program in those areas.  Foraging provides both evolutionary support and ecological validity to link lab studies with animals' situations in nature, over a vast range of capabilities.  Here's an outline of how that works:

Consider the notation "-->" to mean "provides an evolutionary base for the emergence of".  Then...

Note: the sequence below is not a strict hierarchy; related hierarchies evolve independently in parallel

  • passive foraging (e.g. corals) --> gradient following foraging (jellyfish, mosquitos)
  • gradient following foraging --> path creation foraging (ants, herbivores)
  • path creation foraging --> goal-oriented foraging
  • goal-oriented foraging --> route planning
  • route planning --> global optimization of route traversal resources
  • route planning with limited resources --> "mental" route planning
  • mental route planning with limited cognitive resources --> cognitive load management
  • cognitive load management --> mental introspection
  • mental introspection --> consciousness

The perceptual aspects of foraging are multi-factorial, and their evolution is even less strictly hierarchical than foraging as a whole.  Key perceptual transitions include:

  • open field, and path network foraging --> localization of self in an "allocentric" environmental landscape
  • allocentric maps --> distinction between self and other
  • objects with complex properties --> indirect "signs" of foraging goals
  • bounded perceptual processing abilities --> attention
  • attention --> endogenous control of perceptual salience
  • endogenous attentional control --> signification overshoot
  • signification overshoot --> the "hard problem" of consciousness
If you want a career in neuroscience that provides a way to get from hardcore neurosynaptic mapping all the way to the most psychological of mental phenomena short of social interactions, picking foraging as a central topic can give you a wide open field full of tasty topics to work on.

Thinking about the evolutionary transitions driven by the basic need for energy that leads to foraging for food sources in this framework illuminates the long, complex path that it will take to fill in the gaps in the quest to understand how the operations of neurons give rise to mental phenomena.  

It's no wonder that philosophers speak of an "explanatory gap" between the brain and the mind, when there are something like nine levels of abstraction between the two.  It will take much detail work by scientists to fill in the intermediate levels before the concepts become sufficiently "common knowledge" that philosophers can comprehend them and recognize the absence of any magic supernatural connection between them, or even any scientistic faith in some unknown kind of materialist connection.  But then, replacement of thought-stopping mystery by knowledge-driven awe at the complexity of nature has always been the role of scientists.

Wednesday, March 29, 2023

Subsidizing green mines could reduce bitcoin's environmental damage

Bitcoin mining is one of the largest consumers of electricity in the world, so it is important to reduce its climate impact by incentivizing mine operators to reduce their use of fossil-fueled energy.  While it is still not the best and highest use of that energy, a bitcoin mine powered by dedicated solar, wind, hydro, or natural hydrogen sources does minimal harm to the atmosphere. Because their only products are small amounts of internet traffic and waste heat, mining datacenters can even be located near their power sources, eliminating the need for expensive, hard-to-approve long-distance transmission lines.

Bitcoin investors and users are likely to be willing to pay a small premium for bitcoins and bitcoin transactions that promote environmentally sound mining practices. Bitcoin exchanges can deliver this premium to green and gold mines via the mining pools that they use, despite the untraceability of failed hash computations.  Exchanges can enhance their brands by working with mining pools to develop certification programs that validate the environmental impacts of the bitcoin mines that they work with.  These incentives will provide bitcoin mine operators with additional motivation to use renewable energy beyond those sources' steadily increasing cost advantage.

Green, Blue, Gray, and Gold bitcoin mines

That untraceability doesn't leave the carbon footprints of bitcoin mining totally unmanageable, though. A way to begin is to start tracking the environmental impact of each bitcoin mine.  Identifying and documenting the details of each mine is not really feasible, but we can take a cue from the hydrogen production industry and identify a few major color-coded categories of impact.  "Green mines" are mines that are powered exclusively by renewable energy. "Blue mines" are mines that may be powered by non-renewable sources, but which effectively mitigate the impact of those sources, most likely by sequestering the carbon dioxide their generators produce.  Those datacenters in the Permian Basin that consume excess natural gas to mine bitcoin are halfway to blue bitcoin, but they need to take their CO2 exhaust and pump it back into the ground.  They could even use that CO2 in its supercritical form for enhanced recovery of oil, but it would be an accounting nightmare to try to track the secondary CO2 produced by burning that oil.  Failing to disqualify enhanced-recovery uses from a "blue mining" label would create a serious loophole in a labeling program.  Mining operations that use fossil electricity without carbon capture would be called "Gray mines".  Powering a bitcoin mine with electricity from geothermal sources or with geological natural hydrogen could even be called "gold mining".
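The color-labeling scheme above is simple enough to state as a lookup. This sketch (the attribute names are my own invention, not an existing certification standard) shows how a labeling program might assign the categories, including the enhanced-recovery disqualification:

```python
def mine_label(power_source, carbon_captured=False, enhanced_recovery=False):
    """Assign the color label described in the text.  Enhanced-recovery
    use of captured CO2 is disqualified from 'blue', closing the
    accounting loophole noted above."""
    if power_source in ("geothermal", "natural_hydrogen"):
        return "gold"
    if power_source in ("solar", "wind", "hydro"):
        return "green"
    # Fossil-powered mines qualify as blue only with genuine sequestration.
    if carbon_captured and not enhanced_recovery:
        return "blue"
    return "gray"

print(mine_label("solar"))                              # green
print(mine_label("natural_gas", carbon_captured=True))  # blue
print(mine_label("natural_gas", carbon_captured=True,
                 enhanced_recovery=True))               # gray, not blue
```

Encoding the rules this explicitly is the point: a certification program whose criteria can be written as a ten-line function is one that exchanges and mining pools can audit.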

Projects like the Cambridge Bitcoin Electricity Consumption Index that report the global energy consumption of bitcoin mining don't link individual mines to their energy consumption, but instead estimate the type of equipment that mines are likely to be using, with mine locations based on anonymized, voluntary reports from mining pools. They could enhance their environmental impact assessments by incorporating government data on regional power production mixes, and such estimates can affect individual mine managers' power sourcing decisions indirectly via their mining pools.

There's another version of this note on Medium with more context.

Saturday, January 14, 2023

Pricing monoclonal antibody treatments

The FDA has approved another treatment for early-stage Alzheimer's Disease that targets the amyloid plaques that are a hallmark of its effects on the brain.  The treatment—lecanemab, brand-name Leqembi, made by pharmaceutical companies Eisai and Biogen—is an intravenous monoclonal antibody that targets amyloid-beta proteins, which accumulate in plaques in the brains of people with Alzheimer's. Researchers have not yet conclusively determined whether amyloid plaques are a root cause of the disease.  There are many things that go wrong in the brains of Alzheimer's patients, and it may be wishful thinking to look for a "silver bullet" single treatment.

Like aducanumab's, lecanemab's side effects can be severe, even life-threatening, and like aducanumab, Medicare will pay for the treatment only in the context of an ongoing clinical study.  As of this writing, no studies are planned, which means that only wealthy people will actually get the treatment.

Eisai and Biogen have priced lecanemab at $26,500 for a year's supply. To many people who only see drug prices for over-the-counter products, that seems like a lot of money.  But is it really?

It's a basic principle of price identification in capitalism for sellers to charge "all that the market will bear", and let competition between sellers exist in order to generate a functioning market. Patents for things like drugs prohibit the market-based pricing mechanism from existing, and all we're left with is monopolistic pricing, where the "competition" is the willingness of the buyer to do without the product. When the buyer's alternative is death, this mechanism doesn't work well.

Putting a price on a long, slow decline to death isn't easy, but that doesn't stop people. According to a press release from the company, Eisai has devised a mathematical formula to compute the quality-of-life value its product preserves, given the drug's measured effectiveness, and priced it substantially below that. If you believe their formula and the results of their clinical trials, it's a good deal.

Like aducanumab/Aduhelm, lecanemab is a monoclonal antibody, so its pricing can be compared to monoclonal antibodies used as treatments for other diseases. The 42 monoclonal antibody treatments listed by pharmacy discounter GoodRx have a median price per dose of $5275, ranging from the government-set $3/dose for some Covid-19 treatments to $239,020/dose for a chemotherapy treatment.

The Alzheimer's treatment from Eisai is given 24 times a year, making its price about $1104/dose. Compared with monoclonal antibody treatments for other diseases, roughly 80% below the median price per dose could be considered pretty low priced.
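
The arithmetic behind that comparison is simple enough to check. A quick sketch, using only the figures quoted above:

```python
# Check the per-dose price of lecanemab against the GoodRx median
# for monoclonal antibody treatments (figures quoted in the post).
annual_price = 26_500     # dollars per year, per Eisai/Biogen
doses_per_year = 24       # biweekly infusions
median_mab_dose = 5_275   # median price/dose across 42 mAbs (GoodRx)

price_per_dose = annual_price / doses_per_year
discount = 1 - price_per_dose / median_mab_dose

print(f"${price_per_dose:.0f}/dose")       # ≈ $1104/dose
print(f"{discount:.0%} below the median")  # ≈ 79% below
```

So "80% cheaper" is a fair round number, though it says as much about the spread of monoclonal antibody prices as about lecanemab itself.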

Monday, December 19, 2022

Who are official directives to wear face masks protecting? Not you.

Two years into the Covid-19 pandemic, the landscape of the risk and what to do about it has changed.  But the response of public health officials and experts and random people with opinions has not.  The data on what's happening is still bad, so even the most thoughtful expert assessment isn't as good as it really needs to be.  And the politicization of the response has created a situation where public statements have to be phrased in a way that impels people to do the right thing even if it's for the wrong reasons and supported by inappropriate facts.

The general population can't distinguish between the virus (SARS-Cov-2) and the disease (Covid-19).  The public health surveillance data for the virus is much better than the data for the disease.  This means that even experts who should know better talk about them as if they were the same.  They end up fighting the virus, not the disease.  Journalists who need a hot, grabby story are motivated to find the most severe way to write or talk about the pandemic.

Here's how to think about the situation if you want to react in a more sophisticated way than by following official guidance, which is oversimplified so that it can motivate the entire population, including those parts that need super-simple instructions and those that are skeptical of, or even opposed to, official directives.

Basic principles

  • Reduce exposure to the virus
    • Stay away from confined areas with poor ventilation
    • If you have to go into risky areas, with lots of people who may be infectious, protect yourself - wear a good mask.
  • Reduce your susceptibility to the disease by getting vaccinated and boosted

Rules to protect yourself

  • Vaccination is better than masking
  • Boosters make vaccination even more effective
  • Any mask is better than no mask
  • Wearing a mask while being vaccinated is better than either one alone
  • Cloth or surgical masks protect the people around you more than they protect you
  • To protect yourself, wear a standard rated mask
    • There are lots of standards: N95, KN95, FFP2, KF94, and more.  It took a bit of searching to find a thorough survey of most of the important ones.
  • A rated mask with a valve protects you, but it doesn't protect people around you.  So use a mask without a valve.  If you are infectious but still feel OK (asymptomatic), don't infect others inadvertently. Even that article misses this point.
Ventilation is an important defense against all airborne risks, but it's complicated to understand and assess, and can be expensive to improve.  It would be nice if some entrepreneur could figure out how to make good ventilation something worth buying.

Saturday, December 03, 2022

Mini-review of Alastair Reynolds' "Eversion"

Alastair Reynolds is in the top tier of my favorite authors.  Eversion is his latest novel.  It's probably best read as a mystery story, with the mystery being which novelistic form it is following.

Is it sci-fi horror, like Reynolds' own Diamond Dogs?  Is it a series of parallel, interlinked stories set in different time frames, like David Mitchell's Cloud Atlas, or Simon Ings' Dead Water?  Or is the linkage between the stories a psychic one, like Philip K. Dick's Ubik? It contains a scary, mysterious object, like H.P. Lovecraft's At the Mountains of Madness, or Iain M. Banks' Excession. And the object turns out to have a mathematical character, like many of the objects in the stories in Clifton Fadiman's collections Fantasia Mathematica and The Mathematical Magpie. The problem with the object might be related to the problem of sphere eversion.

Without giving away a significant spoiler, it turns out to be all of these, and more. Fitting all these challenges together is a tall order, and Reynolds almost succeeds.  But Reynolds is not really a stylist, and this kind of story needs absolute mastery of style to make its shifts of context enjoyable.  The stylists in the list above, Mitchell and Ings, don't have the command of technology to meet Eversion's other demands, though. If I were more of a fan of horror, I might have found it totally satisfying. I'm hard to please.

Bookseller note: Goodreads is owned by Amazon, so take its bookseller recommendations with a grain of salt.  I try not to buy from Amazon these days, because of its anti-competitive practices.  But it's hard to find any large company these days that doesn't have an anti-competitive streak.

Wednesday, November 30, 2022

Too much planning to survive can reduce survival

It's a long, strange path that our ancestors have taken to get to our level of cognitive processing.  It's understandable, but not forgivable, that many philosophers don't bother tracking it all the way through from microbial beginning to the latest cultural edifices.

Everyone knows that evolution works by "survival of the fittest".  Which is almost a tautology since "fitness" is defined in terms of number of descendants who survive to reproduce themselves.

Less well understood is how species with complex individual members come about.  It occurs because there is always variation in complexity, and some variants acquire increased fitness by virtue of some aspects of their complexity.  Because "there is always room at the top", there's a general trend towards ecosystems hosting populations with greater complexity.

Then at some point in the evolution of greater and greater complexity, the ability of individuals to make plans can appear.  This can take a long time, but nature can take as much time as it needs; it's not on any particular schedule.

Among the things that planning can do, is make plans to survive.  An organism that can make plans intended to enhance its survival in certain situations (and then execute those plans) will have a greater likelihood of surviving those situations than an organism that just reacts to the immediate aspects of them.

However, the process of planning consumes cognitive resources and attention.  Computational and game-theoretic analyses of the planning process have shown that comprehensive planning involves a search through a space of all possible sequences of actions that grows exponentially in the size of the problem space, or equivalently in the depth of search traversed before a particular plan of action is abandoned in favor of an alternative.  The game of chess is the classic example of planning in a situation whose solution is beyond the capability of any human or computer yet built.
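
To see how quickly exhaustive planning blows up, just count the action sequences a planner must consider. A toy illustration (the branching factor of 30 is my stand-in for a roughly chess-like game; chess's actual average is usually put around 30-35):

```python
def sequences(branching_factor: int, depth: int) -> int:
    """Number of distinct action sequences an exhaustive planner
    must consider when looking `depth` moves ahead."""
    return branching_factor ** depth

# A modest branching factor of 30:
for depth in (2, 4, 8):
    print(depth, sequences(30, depth))
# Depth 8 already requires over 6.5e11 sequences -- far beyond
# what any organism can afford to evaluate exhaustively.
```

Each additional step of lookahead multiplies the work by the branching factor, which is what "exponential in the depth of search" means in practice.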

Cognitive resources used by planning might be more effective in enhancing survival if they were applied to reacting quickly and precisely to situations, rather than focusing on planning.  The stereotype of the "absent-minded professor" is an example of this kind of misallocation of resources.

Thus, the most effective kind of planning is resource-bounded, making  heuristic estimates rather than carrying the planning through to a conclusion.  This creates an inverted-U shaped function of the effectiveness of planning in enhancing survival vs the amount of resources applied to planning.  

Maximizing survival involves finding the sweet spot between planning and action.  Finding and maintaining planning activities at this sweet spot requires identifying and controlling the depth and comprehensiveness of planning.  Ability to exercise this kind of control provides its own survival advantages, with its own inverted-U properties.  

Higher order control of planning is one of the cognitive processes involved in consciousness.  Recognizing this provides part of the answer to the questions of the usefulness of consciousness, and to the evolutionary origin of consciousness.

Wednesday, October 26, 2022

Passkeys - a password killer at last?

 Betteridge's Law of Headlines is right again.  Nope.

Ars Technica has an enthusiastic article triggered by an announcement that a few more companies have jumped on the FIDO Alliance Passkey bandwagon.  The Ars commentariat remains skeptical.

I've not yet encountered a passkey authentication prompt in the wild, so maybe that bandwagon isn't rolling as fast as its sponsors would like you to believe.  The key thing to try to understand is who the audience for passkeys is: It's the connected person in a connected world.  If you're a person who has a phone, a smart watch, a notebook, and a desktop PC, and your house has an Amazon Echo or three, and you despair of keeping your accounts synchronized and secure, passkeys might help.

If you're a low-tech person, or a security-conscious person who doesn't trust the ability of tech giants to create and manage securely interoperable infrastructure, this is just more unwanted complexity.

For example, my Mother lives in a small town, and her bank's website doesn't support even basic SMS or voice callback two-factor authentication, because their customer base is so unsophisticated that they wouldn't tolerate the hassle.

Enormous companies like AT&T are so disorganized that they can't manage a two-factor system that supports more than one phone at a time.  To think that they'll be able to do a clean, secure job of deploying passkey technology is laughable.

Yubico, who makes security tokens, has a nice chart showing how deeply dependent passkeys are on having smart devices fully connected to the cloud.  If you ever travel out of range of cell service, you're out of luck.

Keep buying the latest and greatest model of all your devices, and you'll be OK most of the time.  Stay in your box, and you'll be fine.


Thursday, June 23, 2022

What is it like to be yourself?

Thomas Nagel's famous essay "What is it like to be a bat?" has been impairing people's ability to think about consciousness for some 48 years.  That's a remarkable accomplishment.  It would take a far longer article than we have space for here to survey all the writing it has stimulated, but there is a bit of news to remark about.

Nagel's philosophical goal is to convince you that there are important aspects of consciousness that are beyond the reach of science.  His major method is first to convince you that he understands consciousness better than you do.  This is hard, because as a conscious person, you have incontrovertible knowledge of your own consciousness. But for many readers, and especially trained thinkers like philosophers, he amazingly succeeds.  There are two primary ways that he makes this happen.

First, he commits a basic rhetorical fallacy, closely related to the "appeal to authority" that is in every list of rhetorical blunders.  I've come to call his error "argument by failure of imagination" and it goes like this:

  1. I'm a smart person. (This is the appeal to authority. Nagel is a well regarded professional philosopher, and part of the job of a philosopher is to perform smartness.)
  2. I've studied this topic thoroughly.  The topic contains a problem X.
  3. In my studies, I've covered every imaginable solution to X.
  4. I've failed to find a solution. I can't imagine how X might be true.
  5. Therefore X is false.
Nagel explicitly concludes "our minds are not constituted to be able to understand the consciousness of bats".

This conclusion is in conflict with the mathematical discovery, in about 1936, independently by Alan Turing, Alonzo Church, and Emil Post, that certain classes of systems that manipulate sentences can perform any possible sequence of manipulation of sentences.  In short, that anyone who can read, write, and follow directions can think any thinkable thought.

This invalidates the jump from step 4 to step 5 in the imaginative fallacy. Just because you haven't found an answer doesn't mean it doesn't exist.  Some problems take a long time to solve, and maybe you just haven't spent enough time on it.  Or maybe you're just a stick-in-the-mud, and need to be more creative.  In my experience, most philosophers aren't nearly as creative as they think they are.

It's a remarkable fact about the diffusion of knowledge that thinkers about consciousness have not incorporated this result into their arguments for scores of years - nearly 90 years by now.

To help you think about the consciousness of other kinds of animals than your own species, Ed Yong has published a new book “An Immense World: How Animal Senses Reveal the Hidden Realms Around Us”.  It's been reviewed in the New Yorker and many other places.

If you are able to reject Nagel's unimaginability argument, Yong's book embodies a method to gradually increase your imaginative capability until you actually achieve the ability to successfully imagine what it's like to be a bat.  Maybe we don't have enough information about the details of the sensory equipment of bats, or the ways that bats' brains process sensory information, but that only means that your understanding of bat consciousness isn't totally accurate, not that there is some fundamental barrier to any understanding whatsoever.

Now, a perceptive philosopher might argue that it's one thing to know all the facts about bat perception and bat consciousness, but it's another thing to know "what it is like to be" a bat.  This snag was embodied by philosopher Frank Jackson in a thought experiment about a vision scientist named Mary, who knows everything there is to know about vision, but is color blind.  Then through a miraculous medical treatment, Mary's disability is cured, and she can now see in full color.  The question is "has Mary learned anything new?"

The conceptual failure by Jackson, and every other discussion that I've read about Mary's situation, is that "knowledge about vision" is not simply a bag of unconnected facts.  If Mary is anything like a real vision scientist, she has constructed a mental model of visual perception systems, and beyond that, she's able to mentally operate that model to provide it with simulated visual stimuli, and watch it produce simulated visual experiences.  When Mary's treatment is complete, and she compares her real experience with the simulated experience that she's been studying all those years, she learns just one thing: "Was I right?" And of course she was.

If you study bats in enough detail, and build a sufficiently accurate mental model of the bat's mind and experiences, you too can operate that model and experience what it is like to be a bat.

If you somehow believe that having a bat's experience inside your own mind is not the same, in some important way, as being the bat's mind without a host mind, then you have other problems, because the same argument rules out:
  1. experiencing what it is like to be a bat (impossible)
  2. experiencing what it is like to be a cat (impossible)
  3. experiencing what it is like to be an ape (impossible)
  4. experiencing what it is like to be someone of the opposite sex (men and women are inescapably, mysteriously different)
  5. experiencing what it is like to be someone of the same sex (impossible. Sorry, bro.)
  6. experiencing what it is like to be your twin sibling (impossible)
  7. experiencing what it is like to be yourself (impossible)
All those self-help admonitions to "just be yourself" would turn out to be impossible tasks.  Thousands of years of writing about the virtue of self-knowledge would turn out to be aspirations that can never be achieved.

Isn't it time to throw Nagel's argument out for good, and learn to understand how empathy really works?

Thursday, April 21, 2022

Where to spend a billion dollars improving the world? Desalination tech R&D.

Updated at the end...

Conor Friedersdorf has a column in The Atlantic.  This week, he asked "Say you received $1 billion to spend on improving the world. How would you spend it? Why?"


My first reaction was "only a billion?"  That probably won't even get you a seat on the board of Twitter. It might take a hundred billion to buy out Mark Zuckerberg's privileged shares of Facebook.

How about making an impact on a major global public health issue?  In 2016 the Bill and Melinda Gates Foundation announced that they would be spending $4 billion through 2021 in the fight against malaria.  Their current commitment is apparently down to about $250 million per year.

For a mere billion dollars, you're going to need to spend the money on something that will have significant leverage.  For ongoing impact, the best thing would be not to spend the entire amount all at once, but to invest the billion, and harvest the income that it produces, while reserving enough to keep the principal growing at a slow rate. That income might be $100 million a year if you choose good investment managers.

Then target your gifts at subjects that will themselves provide leverage.  For $100 million a year, you can sponsor free broadband Starlink satellite internet for about 50,000 households.  That's almost exactly the number of households in the Navajo Nation counted by the 2010 census.  With an unemployment rate approaching 50% and a population that mostly resides in a desert landscape far from urbanized areas, often without electricity or running water, internet access will create a launchpad to education and job opportunities that are inaccessible today.

But improving the lives of a hundred thousand people or so is far from "improving the world".  We need even more leverage. This means investing in technology to improve the world, in order to lower its cost and make it available to people whose lifestyles are not yet improved to the levels that the rest of us enjoy.

The basics are, as usual:

0. Stable governments and economic systems.  It's hard to see how these can be achieved by spending money.
1. Clean water
2. Electricity
3. Communications technology. Over the long term, education enabled by widespread internet access can lead to improved government, until advanced social media algorithms enhance factionalism and social unrest.

The adjectives "affordable" and "universal" go along with each of these.  According to the World Bank, there are nearly 700 million people with incomes below $1.90 per day.  That $1 billion could give each of them a one-time gift of $1.42.  The huge number of radically poor people creates a serious challenge to any attempts to lower costs on the basics to the "affordable" level for the goal of "improving the world".  But if we don't work on it, we'll surely never achieve it.
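
The back-of-envelope figures above are easy to check. In this sketch, the 10% annual return and the roughly $167/month Starlink price are my own assumptions, chosen to match the round numbers quoted earlier:

```python
# Sanity-check the philanthropy arithmetic in the post.
# Assumptions (mine): a 10% annual return, Starlink at ~$167/month.
endowment = 1_000_000_000
annual_income = endowment * 0.10         # ~$100 million/year
households = annual_income / (167 * 12)  # Starlink-sponsored households
per_person = endowment / 700_000_000     # one-time gift to the global poor

print(f"income: ${annual_income:,.0f}/yr")
print(f"households: {households:,.0f}")  # ~50,000
print(f"per person: ${per_person:.2f}")  # ~$1.43, one time
```

The point of the exercise: a billion dollars spent directly is a rounding error against global poverty, which is why the post looks for leverage instead.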

Electricity is making great progress towards becoming "too cheap to meter".  If you put solar panels on the roof of your house, you've already got that unmetered power. Your panels feed into your house wiring on your side of the electric meter, and their power doesn't get metered unless your panels overproduce and you sell your excess back to the grid.  Global investment in solar and other renewable power sources is huge, and it's very difficult to find an aspect of it where an additional billion dollar investment would make a significant impact.

So consider access to clean water.  While great progress has been made in improving this situation, in 2020 nearly 500 million people still did not have access to safe drinking water, according to the WHO/UNICEF Joint Monitoring Programme (JMP) for Water Supply and Sanitation. In many places, installing a simple pumped well or capturing rainwater can solve their problem.  

However, water in desert and near-desert areas is not so easy to obtain. There are many coastal areas where the demand for water exceeds the supply, such as Southern California or the Middle East, and large-scale desalination systems are already in operation.  Even far from coastal areas where fresh well water is absent or depleted, there are often brackish aquifers that are unsuitable for drinking as they come out of the ground, but can be made drinkable with minor desalination.  Desalination of groundwater is already affordable in places as far inland as El Paso, Texas.  With cheap solar electricity abundant in sunny desert climates, the energy efficiency of desalination is not so much of a problem.

If there's no groundwater or rainfall available at all, it's tempting to look to air capture of the water vapor in the atmosphere.  In a few places in the world, it never actually rains, but local weather conditions produce foggy days with high humidity where condensation equipment might be already effective.  But in the deep desert with relative humidity in single digit percentages, it's necessary to process a lot of air in order to obtain a few liters per day of drinkable water.  With cheap solar panels, this might be affordable.

I would spend my hundreds of millions of income from that billion dollars on funding R&D toward lowering the cost of water processing.  The demand is insatiable, and warming climates are making drought a new normal state.
 
Update
 
The Atlantic published Friedersdorf's summary of the comments he received. 

Desalination and water treatment was the solution offered by a couple of other writers as well.  Of the other suggestions, the ones that I liked best were concerned with strengthening the bottom of the food chain, by encouraging agricultural practices that enhance the soil, and by simply buying up tropical rainforests, removing them from the threat of clearcutting by exploiting capitalism against its more short-sighted impulses.  The latter is, of course, exactly what the tropical programs of The Nature Conservancy do.

I thought that the suggestion to make profiting in any way from untrue speech unlawful, just like making false claims about material products is already illegal, was interesting.  It might be difficult to phrase such laws to prevent criminalization of fiction writing or even telling kids about Santa Claus, though.

Sunday, March 06, 2022

The long game in Ukraine

In the excitement of the start of an invasion, there's not much discussion of how this situation might play out in the long term.  Here are some thoughts.

In a modern war, the losses for every side exceed their gains - there are no longer true winners. “Winning” means losing less than the other guy.

Economic losses for Russia are larger than economic losses for Ukraine, simply because Russia is larger. Russia has lost already, even if they don’t know it.

But Putin’s eyes are on empire, not on his peoples’ welfare. An empire of destitute serfs is still an empire.

Militarily, the fall of Kyiv and replacement of its government by a puppet “Belarus South” would be satisfactory to Putin. Nothing less is likely to suffice.

Even if he has to back out this time, he's going to keep trying one way or another.

It looks to me like there are three strategies that lead to continued Ukrainian independence.

1. Regime change in the Kremlin. This is unlikely. Even without Putin as leader, a new Putin-wannabe is likely to take his place. Russia’s repressive kleptocratic bureaucracy will take generations to replace, regardless of who leads it.

2. Protracted insurgency: Ukraine becomes a European Afghanistan. The Afghans beat back the Russians in the 1980s and the Americans in the 2000s. The Ukrainians appear to have the spirit to do the same.

History blog A Collection of Unmitigated Pedantry has an extensive review.

3. Logistical interdiction. Napoleon famously observed that “an army travels on its stomach.” Houstonians have experience with being stuck in a 40-mile long traffic jam. Now imagine one in wintertime!

As long as the west can keep Ukrainian troops supplied with ammo and anti-tank and anti-aircraft missiles, and keep Ukrainian planes in the air, targeting fuel trucks can keep Russian troops trapped in their vehicles, waiting to be taken prisoner when they run out of supplies.

Experts remain baffled about why Ukraine is not attacking that famous 40-mile long convoy on its way to Kyiv.  Maybe they know that the convoy has stalled out on its own, and they're deploying their scarce resources elsewhere.

Alex Vershinin’s analysis at War on the Rocks last November has details on Russia's logistical doctrine.

If we see Russian forward bases being built and roads being kept open for supplies, this strategy will have failed. But so far, the kind of construction that the US performed at Bagram Air Base in Afghanistan is not being reported in Ukraine.


Saturday, February 05, 2022

Money and Payments: The U.S. Dollar in the Age of Digital Transformation

The US Federal Reserve has released its long-awaited study of a digital dollar, exploring the pros and cons of the much-debated issue and soliciting public comment.

 Here's what I think, in the form of my comments on the 22 questions that paper requests feedback on.

CBDC Benefits, Risks, and Policy Considerations

 
 
The reasoning behind an intermediated CBDC is very unclear.  Why shouldn't the CBDC be fully disintermediated?  That is, individuals and institutions could deal directly with a new division of the Federal Reserve.

 
A CBDC would be likely to have only a minor effect on financial inclusion, unless it is implemented as an automatic service (with an opt-out feature) associated with registration for Social Security, tax refunds, or other processes involving government payments.
 
 
It would decrease the time-to-effect of monetary policy decisions, if the Federal Reserve would choose to act directly on CBDC accounts rather than adjust banking policies.  This would increase the bandwidth of fluctuations in monetary indicators, permitting volatility that is not measurable by existing means, and introducing new avenues for uncontrolled arbitrage, thereby reducing stability.
 
 
 It could drive certain financial intermediaries, such as Plaid, Inc. out of business, by providing a low-friction way to create new linkages between accounts in differing financial institutions.

It could make leveraged and weakly or non-collateralized stablecoins even more obviously untethered to their supposed underlying currency than they already are.
 
 
The greatest advantages of a CBDC over conventional payment systems are its reduced risk and reduced costs relative to for-profit institutions, and relative to non-profit institutions due to its scale.

Reduced risk is intrinsic to the existence of a CBDC; it cannot be fully compensated for, but the development of insurance programs for loss and transaction failure beyond existing programs such as FDIC could make progress towards equalizing this advantage.

Transaction fees on CBDC activity would mitigate the threat to existing financial institutions due to its reduced cost. These fees would act as a source of income for the Federal Reserve, and help make CBDC operations self-funding.
 
 
 It is vitally important to preserve a means for users without smartphones or mobile computers to interact with the payment system. A single-purpose CBDC device available at no cost could fulfil this need.
 
 
In-person payments are in the process of evolving to become fully frictionless via biometric means such as face recognition, although serious security issues remain unresolved.  Wearable or implantable RFID authentication may be an important component of this.

Cross-border payments are being disrupted by stablecoins and other internet-based transfers; their future success remains unclear due to varying regulatory regimes in different nations.
 
 
 Any Federal Reserve CBDC must always be the most trusted, stable, and authoritative of any nation's CBDC.
 
 
Transparency of operations and open designs and design processes are a key method for reducing the risk of long-lasting defects in complex systems.
 
Transparency of linkage of CBDC deposits to other forms of dollars was not discussed.  Processes and stakeholders for arriving at a technical design of a CBDC were not discussed.
 
 
 Privacy and traceability are intrinsically at odds.  As we have seen innumerable times, no software system can be guaranteed secure.  Privacy will inevitably be breached, if not by direct criminal hacking, then by insiders abusing authorized access.  Policies for authorized breaches of privacy should be documented in detail.
 
US HIPAA and EU GDPR regulations are a starting point but not sufficient.  It should be required that account holders be able to obtain the transitive closure of all transfers of PII between processors of CBDC funds from a small number of sources, yet not so small that documenting privacy boundaries itself leads to erosion of privacy.  This is a difficult problem. The Federal Reserve should create a program funding academic research in this topic as part of its studies of CBDC designs.  This program should include organizational design within its scope.
 
 
It should go without saying that the design and implementation of a CBDC must follow best practices in system security engineering and system development lifecycle (SDLC) processes. NIST Special Publication 800-160 may be a useful guide in designing these development processes.
In particular, all software and system designs for a CBDC must be open source, with well-funded "bug bounties", including development and operational tools such as compilers and load managers.  CBDC operations must have world-class system support and software distribution capabilities, including update QA and delivery.

 
 If CBDC deposits are assets of the US Federal Reserve, denominated in dollars, they must be legal tender as comprehensively as physical dollars.

CBDC Design

 
 Interest on CBDC accounts could be a valuable tool for managing the money supply. It can also make CBDC deposits resilient to inflation.
 
 
 The Federal Reserve and Congress must determine the degree to which they wish to indemnify users against losses due to user errors, crimes, or errors by the system itself.  This amount must be less than an amount that would cause significant impact to the national and global financial system.
 
 
 I fail to see a real advantage to intermediating a CBDC, unless that intermediation consists of the creation of a new quasi-governmental agency to administer CBDC accounts.  While intermediation by existing institutions such as banks provides a way to jump-start CBDC administration, it simply gives them a new profit center with a lower risk profile, while adding little.
 
 
 Global and even nationwide cell phone and internet connectivity cannot be assumed, thus an offline mode is necessary for a national CBDC. Because distributed database integrity protocols such as two-phase commit are complex, CBDC accounts should need to implement only a simple "offline mode" protocol.
 
A simple offline mode would lock out online transactions and permit only a single device to implement offline transactions, with withdrawals limited to the account balance.  Exiting offline mode would replay the accumulated transactions into the online system as a batch process.
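
The offline-mode discipline described above can be sketched as follows. This is a toy model with hypothetical names; a real CBDC device would add cryptographic signing, sequence numbers, and tamper-resistant hardware:

```python
class CBDCWallet:
    """Toy model of the single-device offline mode described above."""

    def __init__(self, balance: int):
        self.balance = balance        # integer cents
        self.offline = False
        self.pending: list[int] = []  # queued offline withdrawals

    def enter_offline_mode(self) -> None:
        # Locks out online transactions; only this device may transact.
        self.offline = True

    def withdraw_offline(self, amount: int) -> None:
        if not self.offline:
            raise RuntimeError("online transactions go through the ledger")
        if amount > self.balance:
            raise ValueError("offline withdrawals limited to account balance")
        self.balance -= amount
        self.pending.append(amount)

    def exit_offline_mode(self, ledger: list[int]) -> None:
        # Replay the accumulated transactions into the online
        # system as a single batch, then resume online operation.
        ledger.extend(self.pending)
        self.pending.clear()
        self.offline = False
```

Because withdrawals are capped at the balance held when the device went offline, the batch replay can never overdraw the account, which is what lets this protocol avoid distributed-commit machinery entirely.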
 
 
 
System to system access is typically authenticated via "API Keys" that are functionally equivalent to passwords.  While passwords have come to be regarded as inadequately secure, and a number of secure alternatives are in wide use, API keys persist due to lack of well-known alternatives.  The Federal Reserve may need to commission an agency such as NIST to organize the development and standardization of a secure system-to-system authentication protocol based on public key technology, incorporating standards for secure key management. 
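
The shape of such a protocol is signed requests rather than bearer API keys. The sketch below uses HMAC with a shared secret, since Python's standard library has no asymmetric signing; a standardized protocol of the kind proposed above would substitute a public-key signature for the HMAC, and all names here are illustrative:

```python
import hmac
import hashlib
import time

def sign_request(secret: bytes, method: str, path: str, body: bytes) -> dict:
    """Produce headers binding the key to this request at this moment,
    so the credential never travels over the wire like a password."""
    ts = str(int(time.time()))
    msg = b"\n".join([method.encode(), path.encode(), ts.encode(), body])
    sig = hmac.new(secret, msg, hashlib.sha256).hexdigest()
    return {"X-Timestamp": ts, "X-Signature": sig}

def verify_request(secret: bytes, method: str, path: str, body: bytes,
                   headers: dict, max_skew: int = 300) -> bool:
    """Recompute the signature server-side; reject stale or altered requests."""
    if abs(time.time() - int(headers["X-Timestamp"])) > max_skew:
        return False  # stale timestamp blocks simple replay attacks
    msg = b"\n".join([method.encode(), path.encode(),
                      headers["X-Timestamp"].encode(), body])
    expected = hmac.new(secret, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, headers["X-Signature"])
```

Unlike a bearer API key, an intercepted request here reveals nothing reusable: the signature is valid only for that method, path, body, and time window.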

 
 Advances in quantum computing and cryptography could pose a serious threat to the confidentiality and integrity of CBDC transactions, not to mention all other financial transactions. The ability of state actors to access quantum and quantum-like computations significantly in advance of those publicly disclosed, and to use them to disrupt CBDC and other financial processes should not be discounted.
 
CBDC encryption and authentication algorithms and protocols should incorporate cryptographic agility properties in order to support timely implementation of cryptographic security advances as they become available.
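In practice, agility mostly means that every message carries an algorithm identifier and that algorithms live in a registry rather than being hard-wired. A sketch, with made-up identifiers:

```python
import hashlib, hmac

# Registry of authentication algorithms, keyed by an identifier carried
# in every message. Retiring a broken algorithm becomes a registry
# change rather than a protocol redesign. Identifiers are hypothetical.
MAC_ALGORITHMS = {
    "mac-v1": lambda key, msg: hmac.new(key, msg, hashlib.sha256).hexdigest(),
    "mac-v2": lambda key, msg: hmac.new(key, msg, hashlib.sha3_256).hexdigest(),
    # "mac-v3": a post-quantum scheme would slot in here once standardized.
}

def authenticate(alg_id, key, msg):
    # Tag the output with the algorithm used so the verifier can match it.
    if alg_id not in MAC_ALGORITHMS:
        raise ValueError(f"unsupported algorithm: {alg_id}")
    return alg_id, MAC_ALGORITHMS[alg_id](key, msg)
```

The same pattern applies to encryption and signatures: the hard part is not the registry but governing when old entries must be refused.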
 
 
 In order to be useful to all citizens, CBDC applications and systems should be designed from the outset with accessibility and usability in mind. US coins and bills are notoriously difficult to distinguish, yet design improvements are nearly impossible to implement.  CBDC systems have an opportunity to avoid such errors. To advance financial inclusion, CBDC functions should be easy to learn by people inexperienced with financial systems and terminology and even with their supporting technologies such as phone apps and web pages.

To provide support for advanced uses, every CBDC operation should have a counterpart API, adhering to well-documented standards.

Monday, November 08, 2021

Automated social engineering of SMS enhanced authentication

There’s a journalist’s slogan about the threshold for a reportable story that goes something like “Once is an accident, twice is a coincidence, three times is a trend.”  Looks like bots that capture one-time-passcodes sent via SMS without human interaction have passed that level, as a report from Motherboard details.

NIST has been deprecating SMS as a factor in multi-factor authentication since 2017, because SMS-based OTPs are vulnerable to man-in-the-middle attacks. These attacks are now automated.  Once an attacker has your username, password, and phone number from a cache of millions of breached records, they do a partial login to your account on a target system; the authentic system sends you your SMS code; and the attacker's bot phones you, pretends to be the target's security organization, and asks you to confirm the SMS code via your phone keypad.

I found out about this via a finance blog that I follow.  When a security vulnerability has reached a level of visibility that financial pundits are writing about it, it’s time for financial systems to convert to a more secure method.  The report from Intel471 offers some suggestions:

More robust forms of 2FA — including Time-Based One Time Password (TOTP) codes from authentication apps, push-notification-based codes, or a FIDO security key — provide a greater degree of security than SMS or phone-call-based options.
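For reference, TOTP is simple enough to fit in a dozen lines of standard-library Python (RFC 6238: HMAC-SHA-1, 30-second steps, dynamic truncation):

```python
import base64, hmac, struct, time

def totp(secret_b32: str, digits: int = 6, period: int = 30, t: float = None) -> str:
    """Time-based one-time password per RFC 6238 (HMAC-SHA-1)."""
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if t is None else t) // period)
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F            # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Note that TOTP removes the SMS channel but not the human: a victim can still be talked into reading a TOTP code to a bot in real time. Only origin-bound authenticators like FIDO keys close that hole.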

Unfortunately, sometimes even simple solutions like physical tokens are unfeasible.  The small town bank that my mother uses has a customer population that is so unsophisticated that they are unable to handle any kind of online authentication more sophisticated than a password, and Mom has trouble remembering hers.


Saturday, September 18, 2021

What's wrong with Haskell?


Whenever discussions of programming turn to functional programming languages, Haskell is invariably brought up as the best example of its kind, or at least the most well-known.  It's eclipsed Lisp, which is a good thing, I suppose, but Haskell has such severe problems dealing with the real world that it would be funny if it weren't so serious.

Maybe that's its most fundamental problem, that functional programming is not for dealing with the real world, it's for dealing with mathematical things, like functions.  When you try to make it deal with the real world, you have to add a bunch of ugly things to the language to get it to cope adequately.

Explaining why this is so could get really long, but you deserve to know what Haskell's top problems are without having to wait for me to write a long essay and then for you to have to read it.  Here, then, are my top peeves with Haskell.

  1. Lazy evaluation of function arguments.  I understand that strict evaluation causes problems when functions are given arguments that don't terminate or have other problems, and those arguments are required to be evaluated because they come first, even though they are thrown away at later points in the function they are given to.  This means that every language that tries to have strict evaluation really needs at least one quasi-function that disobeys this rule and does lazy evaluation.  This is usually the "if" function, or a "case" or "cond" function that uses shortcut evaluation and returns as soon as one of its arguments succeeds, before evaluating all of them. 
    But lazy evaluation makes it difficult to create a mental model of what a complicated series of function applications ends up doing, because you have to understand the sequence of applications all the way to the end before you can be sure that you've accounted for the effects of all its arguments.  This leads to excessive demands on mental working memory, which leads to programming errors.  Functional programming advocates argue that this isn't a problem, because modeling execution is bad practice; you're supposed to prove properties of the program instead by reasoning about them.
    Lazy evaluation also makes some cool things possible, like potentially infinite data structures that don't cause problems because functions on them always return before running out of data.  In reality, I've only seen one use of infinite data, and that was in Conal Elliott and Paul Hudak's functional reactive programming idea.  FRP used an infinite list of input events instead of an infinite series of "read" function calls inside an IO monad.
  2. Monadic IO. Using the category theoretic concept of a monad to force sequential execution of functions into a lazy evaluation framework where argument application doesn't need to occur in the textual order presented in the program code is a brilliant solution to a hard problem, but it's the wrong answer. Monadic IO is presented as a way to textually encapsulate the side effects of interacting with a stateful world outside of a program, giving freedom from side effects to the rest of the program, but in far too many instances the entire program is wrapped in an IO monad, leaving the stateful side effects just as pervasive as they ever were.
    An alternative design direction would be to think more deeply about the notion of "referential transparency" and build control over referentiality into the language.  Instead of a simple concept like "functions with the same arguments always give the same result" and calling violations of this rule "side effects" to be avoided, go back to W.V.O. Quine's short discussion in the book Word and Object, and understand what's going on as the names "Cicero" and "Tully" are both different and the same as they refer to that guy who lived in Rome 2,000 years ago.
    Control of referential depth exists in languages like C that provide pointers, but as far as I know pointers have never been successfully implemented in a language with strong typing that can prevent "attempt to dereference null pointer" errors at compile time.  Most languages that attempt to deal with this problem do it by eliminating pointers entirely, and create special cases for situations like strings where references are the natural way to deal with the type.  In lisp you have the 'quote and 'unquote forms that partially deal with managing referential depth in a general way, but those work because lisp is dynamically typed.
  3. Strong typing that's not quite powerful enough. Haskell was one of the first functional languages to incorporate strong typing that was powerful enough to be useful, for certain senses of "useful", and that was a significant achievement.  But Haskell's type system is first order, and that causes problems.  It means that flat data structures like vectors and arrays either have to be faked as lists of lists, or restricted to a single fixed size.  You can define something like an int_vector_len10, but there's no way to express a type int_vector(size) where the size is defined at compile time.
    Functional languages with type systems that support such parametrized types exist, and their type objects are called "dependent types".  Unfortunately the people who design dependently typed languages are mostly interested in tools for doing mathematics such as proof assistants, not in tools for doing more general things, so their support communities are small and specialized, and they don't evolve the libraries and tooling needed for broad use.  We can only wait and hope that dependent typing technology becomes well known enough that it will migrate into more general purpose functional languages.  Maybe even a future generation of Haskell, who knows?
  4. Pervasive Currying.  Currying converts a function of multiple arguments into a chain of single-argument functions, each returning a function of the remaining arguments.  It allows you to get rid of all those parentheses which are so annoying to people who are just learning lisp.  This is cool, but bad.  Absence of parentheses obscures program structure by requiring a programmer reading an unfamiliar piece of code to look up and keep in mind the type definition of every function that's in use, in order to determine the abstract structure of the code. This introduces a memory burden on the reader that slows understanding and permits misunderstandings of what's going on. In a large program that uses library functions such as those in Haskell's standard prelude, it's a serious problem.
    With parentheses or other delimiters, you can discover the code structure without having to understand the specifics of every element.  Authors of Haskell programs are subconsciously aware of this problem, and use line breaks and indentation to suggest the structure.  But these hints are not required by the compiler, and cannot be trusted.
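Two of the ideas above translate outside Haskell, which may be the fairest way to weigh them. Lazy, potentially infinite data can be approximated with Python generators, and currying with partial application. A sketch only; generators give you laziness on demand, not Haskell's pervasive call-by-need semantics:

```python
from functools import partial
from itertools import islice

def naturals():
    # A lazy, potentially infinite sequence: values are produced only
    # on demand, like a Haskell infinite list (or an FRP event stream).
    n = 0
    while True:
        yield n
        n += 1

# Consuming a finite prefix of infinite data never diverges.
first_five = list(islice(naturals(), 5))

def add(x, y):
    return x + y

# Currying by hand: fixing one argument yields a one-argument function.
add3 = partial(add, 3)
```

Here the laziness is opt-in and syntactically visible, which is exactly the property that makes the execution model easy to keep in your head.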

Maybe the right way to think about functional programming, especially with typed functional languages, is that it's a declarative framework for thinking about and expressing provable facts about complex functional expressions.  This is an important goal for programming and creating tools that facilitate it is an admirable endeavor. The fact that those expressions can be executed and produce results is almost an irrelevant afterthought.

Declarative programming entails a radically different mindset from programming in a style that reflects the hardware structure of computers.  Many smart people believe that it's better to think in this style than in a more conventional style.  But maybe you thought that computers could be used to interact with the real world, and maybe even model its behavior?  What kind of a weirdo are you?

 

Wednesday, September 01, 2021

Is obesity research too hard?

The M3 theory of weight regulation, or, no silver bullet for weight loss.

Sometimes ideas that have been floating around in your mind suddenly fall together.  This seems to be one of those cases.  The trigger might have been a recent report by Herman Pontzer and 83 others, who studied 6421 people and found that metabolism peaks at ages 2-5, plateaus during adulthood, and then slowly declines after about age 60.

What makes a model too complex?

M3 stands for "multifactor, multimodal, metabolic".  Weight management is under the control of a multitude of different factors; it's a complex system that can be perturbed in a large number of ways, and the elements of the system are linked by an equally large number of feedback controls that make predicting the magnitude of the ultimate effect of any single perturbation very complex.

It's obvious that complex systems like this cannot be communicated in the simple concepts and language that popular journalists are obligated to use if they intend to engage successfully with a large audience. It's less obvious, but still likely, that public health officials, who wish to issue guidance that can be followed by most people, but who attempt to base their guidance on the best scientific knowledge, could not effectively synthesize that knowledge in consumable form even if they had an adequate scientific model.

Today's insight is that an adequate scientific model may be impossible to obtain, because the technology currently used to manage scientific knowledge isn't up to the job.  Scientific knowledge advances one publication at a time.  But it's cumulative only in the way that a termite mound or beehive is the result of myriads of individual contributions from individual termites or bees.  The mound does not respond to external stimuli on the same time scale as the stimuli themselves.  When new scientific knowledge about a complex model appears, it does not update the model until a new textbook is written that includes the new or revised item.  Figures and equations in textbooks are not executable models, they're representations that allow readers to build executable models in their heads or in computers.  

And it takes executing the model to determine whether any proposed intervention will result in a desired outcome.  If the model is too complex to be operated in your own head or on your own computer, it's not a useful model for managing the system, even though it may be true.  The best that can be done is to measure how accurately the usable models work at each timescale, and to track how they improve as more compute power is applied to them, and whether they're improving over the years.  Tracking improvements in forecasting accuracy is done in meteorology, and practically nowhere else.

The simple model

Every time a discussion of a new "breakthrough" in weight management is announced, someone inevitably pipes up with "it's easy: calories in minus calories out.  It just takes will power, you lazy wimps."  Not only is this insulting, but it's wrong.  Calories are a measure of energy, they don't convert to mass except in high energy particle accelerators.  The weight management fundamentalists should be talking about grams of carbon, not calories.  And they need to talk about rate of carbon in minus rate of carbon out.

The same bathtub analogy that is used for climate warming works for weight management.  Suppose we have a bathtub with a drain that can't be shut off completely, but can be opened up to allow more flow beyond that basic level.  That basic flow represents the body's base metabolism used for simply keeping you alive, and the rest of the flow is what's consumed by other daily activities.  The bathtub also has a faucet whose flow represents the contents of the food that is eaten every day.  We want to regulate the amount of water in the tub, so that it doesn't overflow or get so heavy that it falls through the floor.  

This model is already difficult to manage, since we can easily measure only the weight of the tub and the rate of flow into it.
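The bathtub is just a stock-and-flow balance, and it is easy to simulate. The numbers below are illustrative, not physiological:

```python
def simulate_tub(level, inflow, base_drain, extra_drain, days):
    # Daily mass balance: the stock changes by inflow minus total outflow.
    history = [level]
    for _ in range(days):
        level = max(0.0, level + inflow - (base_drain + extra_drain))
        history.append(level)
    return history

# When inflow equals total outflow, the level holds steady;
# tip the balance either way and the stock drifts without limit.
steady = simulate_tub(70.0, 2.0, 1.5, 0.5, 10)
gaining = simulate_tub(70.0, 2.5, 1.5, 0.5, 10)
```

Even this toy makes the control problem visible: the drifting variable (the stock) responds only to the *difference* of two flows, and in the body neither flow is directly measurable.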

Calories are a very imperfect measure, since they are an imperfect proxy for carbon. Calories are measured by burning a substance and measuring the amount of heat produced in excess of the amount needed to ignite it.  Since food is made of carbohydrates and proteins that contain both carbon and hydrogen, some of that heat comes from burning the hydrogen, and the portion from carbon that we're interested in must be estimated from a chemical analysis of the food.  And because some of the measured calories come from sources that are not well digested, such as fiber, calorie measurements overestimate the amount of carbon that becomes bodily tissue even more.

The M3 model components

If it's too complex for all of science to deal with, it's too complex to describe in detail here.  We can just give a top level outline of its key components.  We can't even list all the linkages between them.  All the components and their linkages form a graph structure, and computer modeling systems that convert graph structure descriptions into executed model runs don't exist, as far as I know.  So anyway, here's a short list:

  • Inputs
    • Carbs
      • glycemic index
    • protein
    • fiber
  • Input controls
    • Appetite - external sensory
      • Mouth feel
        • crispy
        • crunchy
        • chewy
        • temperature
      • Flavor
        • salty
        • sweet
        • savory (umami, MSG)
        • smell - hundreds of qualities
      • associative learning
        • appearance
        • smell
    • Hunger - internal sensory
      • blood sugar
      • stomach fullness
  • internal processing
    • digestion efficiency
      • microbiome spectrum
    • glucose production
    • glucose consumption
    • insulin-controlled conversion rate
    • tissue targets
      • white fat
      • brown fat
      • muscle
  • Outputs
    • breathing - CO2
      • base rate
      • exercising rate
      • exercise level
    • excretion

A better list would attach to each item and link a citation to the scientific literature that provides evidence for its properties.
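Nothing stops us from at least writing the component graph down in machine-readable form, which is the first step toward any executable model. A fragment, with edges that are my guesses from the outline above rather than established links:

```python
# A fragment of the M3 component graph as an adjacency list.
# Edges are illustrative guesses from the outline, not established science.
M3_EDGES = {
    "carbs": ["glucose production"],
    "glycemic index": ["blood sugar"],
    "blood sugar": ["hunger", "insulin-controlled conversion rate"],
    "hunger": ["carbs", "protein", "fiber"],   # feedback to the inputs
    "insulin-controlled conversion rate": ["white fat", "muscle"],
    "exercise level": ["breathing - CO2"],
}

def downstream(graph, start):
    # Everything reachable from one component, e.g. to trace how far
    # the effects of a single intervention could propagate.
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen
```

Even this fragment shows why the silver bullet fails: start at almost any node and the reachable set fans out through the feedback loops.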

New frontiers in weight control

The silver bullet would be to identify a single item in the list above that is both measurable and controllable, so that you could manage your weight by controlling that item. But this is impossible, because there are always at least two paths between input and output at any stage of control and processing.  Calories are a single measurement, but because the nutrients that provide calories (and their associated carbon) have different rates of utilization (glycemic index) and bioavailability (fiber does not get converted to energy or weight), it's an incredibly unreliable measurement.

Any combination of measurements and controls is even harder to manage and analyze.  What we ideally need is an artificial intelligence system that tracks a bunch of nutritional properties and identifies an optimal combination of them to maximize flavor while maintaining a target weight.  And it needs to be frictionless and transparent at the same time, quietly looking over your shoulder whenever you attempt to eat anything, computing how it will affect your weight management plan, and gently suggesting alternatives.  Alas, this is well beyond the state of the art in AI technology.  Training the machine learning part of such a system would take not 6000 participants, but 600,000 of them or more, each tracking every meal and a host of metabolic indicators.

The latest trend in public health interventions to manage an obese population is dietary sugar, and in particular sugary drinks.  Sugar is a powerful contributor to weight gain since it's both calorie dense and is metabolized very rapidly. It makes a big contribution to the rate of carbon input to a person's mass flow balance.  So regulating sugar input by public health officials might have some impact.

Focusing on the appetite component of the M3 model offers additional possible opportunities for management.  One of appetite's most dangerous properties is that it's insatiable.  It's not for nothing that Lay's Potato Chips once had a slogan "Bet you can't eat just one."  Eating something that's crispy, crunchy and salty doesn't satiate, it increases the desire to eat more.  

How this works is very unclear, and may be beyond the capability of current neurophilosophical thinking to clarify.  Super-appetizing foods have biased, if not overridden, the free will of a large fraction of the human population.  Philosophers arguing over whether free will exists amazingly fail to take facts like this into account. A will that is partially free and can be biased or overridden is incompatible with the content and methods of these arguments. Simply saying that these foods have been engineered to be addictive, as many writers assert, merely assigns blame without explaining the phenomenon.

But if you can't control the desire for salty snacks, you can at least improve their food value.  Instead of starchy chips and puffs that are metabolized very rapidly, you can choose snacks with more protein, which is metabolized more slowly, such as pork chicharrones, and ones that contain more fiber, which is not metabolized at all.  It's something to look for in the grocery store.