Mostly Security Stories
An occasional blog of interesting stories and factoids, mostly about information security in the dark ages of cyberspace -- the present.
Thursday, May 02, 2024
Two great lies in financial policy
Friday, February 23, 2024
What would world civilization look like if the US collapses?
Civilization would survive just fine. But it might not be a robust high-tech 21st century civilization. That might actually be a good thing - it's hard to tell.
I've written an essay explaining how I came to this conclusion. Medium says it should take about 8 minutes to read. But if that's too long for you, here's an extended summary.
The United States in early 2024 is in a political situation where collapse into a quasi-civil war like "the Troubles" in Northern Ireland seems like a possibility. Elected politicians in Texas are calling for military-aided defiance of Federal authorities, supported by the governors of 25 other states. But unlike the first US Civil War in the 1860s, there is no sign of the creation of large state armies to oppose the US Army, and the states themselves are internally divided to the point where a next war would be as much a "war within the states" as a "war between the states". Nobody in the Texas Legislature is proposing to fund the Texas Military Department to a level where it could pose more than symbolic opposition to Federal forces. It's more likely that violent opposition to the United States would take the form of "stochastic terrorism" (I prefer the term "freelance terrorism") - bombings and random mass shootings. Whether these could become focused enough to target Federal buildings and political gatherings seems doubtful.
But it's interesting to imagine what might happen if the US went into a collapse as deep as the Great Depression of the 1930s, one that somehow became permanent.
The global impact of US collapse would span five realms: general economic activity, social and cultural activity, geopolitics, technological development, and environmental stability.
The loss of the US as an economic force would severely but not fatally damage the global economy. The Dollar would lose its role as the world's reserve currency, and this would have a tremendous impact. The World Bank, the Euro, and the Chinese Renminbi are waiting to take over if the situation becomes intolerable, though.
Global culture would not be significantly affected. High culture of symphonic music, fine art, and fashion has always been ruled by Europe, and would stay that way.
Geopolitically, the long-predicted end of the Pax Americana would finally be realized, though the Great Game of pre-WWI colonialism is gone forever, never to return. The Mideast would continue to be the same mess of intra-Islamic jihadism that it's been since the end of the Ottoman Empire. China's dominance in the Far East would finally be unquestionable.
Attacks on Taiwan would lead to a major technological setback, since the most powerful semiconductors are made there by TSMC. Software to use the computational power of those semiconductor devices might lose the creative momentum that originates in Silicon Valley. But the tech giants are fully globalized, and can easily migrate transactions and data from their already fortified datacenters to ones in less unstable areas.
Advanced electric power technology would easily be able to fill in the gap caused by the loss of the US.
When it comes to transportation, the US is no longer the uncontested leader in technology, but only a participant in a close race. The US is losing its lead in aerospace technology. The US is not even in the running for the lead in advanced railroad technology. Automobile and truck technology has long been a global competition, and the loss of US auto manufacturing would wound employment in Mexico and Canada, but not significantly elsewhere.
The environment continues to be destroyed at a rate exceeding its restoration regardless of the details of civilizational conflicts, although there are macrotrends that act to slow the rate of destruction.
As long as the High Income countries (aside from the chaos-plagued US) continue to produce pollution-reducing solutions, then as Low and Middle Income countries graduate into the upper tier (and assuming that the World Bank and OECD don't move the dividing lines), their improving governance and economic incentives will lead them to reduce their emissions as well.
As we sum up the effects of US chaos across the five realms of global civilization, it appears that short of a global thermonuclear war, the chief threats are related to the reduction of silicon and lithium processing capability for computers, photovoltaic power sources, and batteries. These capabilities are concentrated in the Western Pacific, and it's essential that the rest of the world build up resiliency against disruptions there.
As long as environmental and climate deterioration can be reversed, the worst that might happen would be a reversion to the American lifestyle that was pervasive in the 1970s, before everyone had PCs and smartphones. With Total Electric Homes and electric cars in garages, this could be quite tolerable.
Tuesday, February 20, 2024
Seven simple fixes for US politics
Simple, though totally not at all easy. But in today's sound-bite environment, simple is a requirement. Half of these could be implemented by individual states without the super high threshold required for Constitutional amendments.
- Ranked choice, instant runoff voting. Reduces partisanship (parties hate this) and saves money.
- Single, open primaries. Runoff first, with a 2-candidate election from the finalists. An alternative or supplement to preference voting that further enhances voter choice. Parties hate this even more.
- Rule-based redistricting. "Non-partisan commission, appointed by politicians" is an oxymoron.
- Population-weighted Senate composition, with a two-Senator baseline, and all seats elected "at large" statewide. One person, one vote, not one state, two votes, yet preserves a Congress with two distinct Houses with differing perspectives. Fixes the inequities of the Electoral College for free.
- Term limits for all Federal elected offices. If it's good enough for the President, it's good enough for Congress and the Supreme Court. Even for the Supreme Court, "Serving during good behavior" notwithstanding, if individual retirement is allowed, then mandatory retirement is obviously also allowed. Mandatory retirement at the age of Social Security would be a bonus.
- Rotating membership in the Supreme Court. Keep the nine justices, but every election cycle retire the senior justice and install a new justice from among the justices of the Appellate Courts, selected at random from those who have not yet served, or from the nine who have served least recently. If the Senate fails to confirm a nominee, a new nominee is selected from the Appellate justices as before.
- Mandatory National Guard service. "A well regulated militia, being necessary to the security of a free state," requires that every able-bodied person who possesses a gun be properly trained and organized. This in no way impairs the right to keep arms, and enhances citizens' ability to effectively bear arms. Organizing refresher tours of service by random selection, just like jury duty, should not be excessively burdensome. Every new purchase of a weapon comes with free state-provided training. Free weapons are already provided to volunteer Militiamen; they should be allowed to keep de-automated ones when their tour ends.
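The first fix, instant-runoff tallying, is mechanical enough to sketch in a few lines. This is a minimal illustration with made-up candidates and ballots, ignoring real-world rules for ties and exhausted ballots:

```python
from collections import Counter

def instant_runoff(ballots):
    """Tally ranked-choice ballots: repeatedly eliminate the candidate
    with the fewest first-place votes until someone holds a majority.
    Each ballot lists candidates in preference order."""
    ballots = [list(b) for b in ballots]
    while True:
        counts = Counter(b[0] for b in ballots if b)
        total = sum(counts.values())
        leader, leader_votes = counts.most_common(1)[0]
        if leader_votes * 2 > total:
            return leader
        loser = min(counts, key=counts.get)  # fewest first-place votes
        ballots = [[c for c in b if c != loser] for b in ballots]

# A leads on first preferences (4-3-2), but once C is eliminated,
# C's voters transfer to B, who wins 5-4.
ballots = [["A", "B"]] * 4 + [["B", "A"]] * 3 + [["C", "B", "A"]] * 2
print(instant_runoff(ballots))  # → B
```

The example shows why parties hate it: the plurality leader on first preferences doesn't necessarily win.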
Monday, January 01, 2024
Almost as good as free will
Stanford professor Robert Sapolsky has concluded that free will doesn't exist. I mostly agree.
Neurobiologists like Sapolsky, psychologists, and even computer scientists have realized that the brain has multiple components that independently make decisions in different domains, a point which seems to have eluded philosophers for generations. Sapolsky's point about our inability to "choose what to choose" takes that dissociation far beyond most philosophers' thinking.
Notably missing from discussions about Sapolsky's ideas is the physicist's perspective. The brain is a material object subject to the laws of quantum mechanics, whose wave-function evolution most physicists regard as fully deterministic, following the Schrödinger and Dirac equations with incomprehensible complexity. In order to preserve free will in quantum theory, some creative physicists have concluded that "electrons have free will".
Yet even without absolute free will, our independence from the environment and other people that allows us to think and act on our own as individuals provides for an autonomous will, which should be good enough for practical and legal purposes.
Unrelated: Happy 2024!
Tuesday, December 05, 2023
The Byzantine Generals Problem also applies to politics with lies and misinformation
The classic work on the Byzantine Generals problem arose in the context of fault-tolerant computing. The Wikipedia entry on the topic is titled Byzantine Fault. Thinking about the problem for reasons that I can't recall, I recently realized that it can apply to political systems infested with lies and misinformation. Studies of this aspect are hard to find, if they exist at all.
The 1982 paper by Leslie Lamport, Robert Shostak, and Marshall Pease is concerned strictly with systems that use only point-to-point communications, rather than political situations where miscommunications are broadcast to audiences of various sizes. Its successors are (almost?) exclusively about reducing the number of messages needed to reach correct agreement despite faults. The remainder are concerned with the consensus mechanisms for cryptocurrencies, and rarely go into any mathematical depth about the consensus formation problem itself. I expected to find discussions of this in the economics or political science literature, but my web search skills, such as they are, didn't uncover any. Maybe their vocabulary is totally disjoint from the computer science vocabulary?
What political scientists should want to know are things like how the probability of a false consensus varies with the probability that any particular general generates a lie, and with the number of variably lying generals. If everyone lies, but nobody lies very often, how much worse or better is that than a situation where some generals lie all the time?
The least bad news is that autocracies can be consistently subverted if at least 1/3 of the "lieutenants" fail to follow the generalissimo. The Achilles' heel of all the variations seems to be vote-counting systems. Open voting, like legislative roll call votes, appears to be most robust to miscounts. Open counting of secret ballots can also work. That's why vote-counting machines must be fully open source.
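As a toy illustration of the kind of question above, here's a Monte Carlo sketch. It's deliberately much simpler than the oral-messages algorithm in the Lamport paper: every general broadcasts a report of the true value, liars flip each report independently with some probability, and honest generals decide by simple majority. All the parameters are invented:

```python
import random

def false_consensus_prob(n=10, liars=3, lie_prob=1.0, trials=20000, seed=1):
    """Estimate the chance that at least one honest general decides wrongly.
    Toy model: the true value is 1; every general reports it to every other;
    each liar flips each report independently with probability lie_prob;
    honest generals decide by majority over all n reports (ties lose)."""
    rng = random.Random(seed)
    honest = n - liars
    failures = 0
    for _ in range(trials):
        for _ in range(honest):  # each honest general sees its own set of reports
            lying_now = sum(rng.random() < lie_prob for _ in range(liars))
            votes_for_true = honest + (liars - lying_now)
            if votes_for_true * 2 <= n:  # majority (or tie) backs the false value
                failures += 1
                break
    return failures / trials

# Three generals who always lie can never break an honest majority of seven...
print(false_consensus_prob(liars=3, lie_prob=1.0))  # → 0.0
# ...but six generals who each lie half the time sometimes can.
print(false_consensus_prob(liars=6, lie_prob=0.5))
```

Even this crude model shows the tradeoff the post asks about: a few constant liars are harmless below the majority threshold, while many occasional liars create a nonzero failure probability.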
Wednesday, November 22, 2023
Prometheus Unbound - the future of AGI after the OpenAI board upset
For tech spectators and AI participants it's been an exciting weekend. The dust has not fully settled, but it appears that the most influential AI company will end up keeping its original CEO, but with a new board of directors. All pretense of being seriously not-really-for-profit and concerned with "AI safety" (whatever that is) is now gone. It may be too soon to understand the detailed ramifications of these changes; some reports say the goal of the latest version of OpenAI's board is to triple its size and rework yet again its organizational structure, giving Sam Altman de jure control in addition to his demonstrated de facto control, with Microsoft playing a more official role this go-round. Time will tell.
People come and go, but the industry landscape is really determined by the insatiable demand of AI for compute cycles. To understand AI power relations, "follow the money" turns into "follow the chips". Who's got the chips now? Nvidia and Microsoft have introduced a new generation of them. Nvidia's H200 displaces the H100, which now becomes last year's ancient history. Microsoft introduced its Maia 100 chip a week ago. Google has had its Tensor Processing Unit (TPU) chips for years, as has Amazon AWS with its Inferentia and Trainium chips. Nvidia powers the vast majority of other AI engines, including those from OpenAI.
If you look at it from the chips and datacenters perspective, the future becomes easy to see. Instead of a single AGI ruling the universe, we will have a handful of titanic AGIs ostensibly ruling from Silicon Valley rather than the Greek Mount Othrys, although their datacenters are really dispersed worldwide. Unlike the ancient gods, these artificial gods will be under the at least nominal control of their respective corporate masters.
The outcome of the "AI alignment" debate is also now clear. AGI development will be aligned not with "humanity" but with capitalism. Many people will become wealthy as a result, and a few people will become unimaginably wealthy. The future of humanity under capitalism has been uncertain for 150 years now; the advent of AGI doesn't really change this.
Thursday, October 19, 2023
Post-modern origin of species
In the 1860s we had Charles Darwin's ideas. Then in the 1940s we had a "modern synthesis" of genetics and population biology. And in the 1970s and 1980s this was tied to molecular genetics. Now we have attempts to describe speciation in even more fundamental terms. Forty years later -- it's about time.
Recently two papers have appeared in Nature and PNAS that attempt to show how to identify when natural selection is occurring in an abstract sense that could help to understand how living systems arise from non-biological systems - abiogenesis. They're not quite successful. In fact, they're so abstract that people are having trouble figuring out even what these theories are trying to do.
An article in Ars Technica is an example of this confusion. From the perspective of "publish or perish", this is a good thing, since it means that there are plenty of opportunities for easy papers explaining and correcting.
From my perspective, the problem is that they're phenomenological, showing how to recognize species, but not explaining why distinct species should even exist, or how they come about. They both assume that species come about via natural selection, although there are other mechanisms that can preferentially increase the population of some kinds of objects within a broader spectrum of varieties. When objects are created and destroyed via some inaccurate replication process, some varieties will take less energy to create, and some will last longer once they're created. These are called "thermodynamically preferred" varieties, and the laws of non-equilibrium thermodynamics (which are mathematical laws, not physical ones) will determine how fast the populations of these varieties grow and decline.
Then in chemical systems, you'll find that some varieties have catalytic properties that amplify the rates of creation of other varieties, and, rarely, autocatalytic properties that amplify the rates of creation of themselves. The autocatalytic property may be distributed across a loop or network of reactions in a hypercycle. Neither paper provides a way to recognize or measure the existence or power of the autocatalytic advantage, although the PNAS paper would ascribe a "function" to it, once it's recognized.
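A toy simulation can make the thermodynamic-preference argument concrete. The varieties, rates, and resource cap below are all invented for illustration; the point is only that an autocatalytic variety comes to dominate the population without any biology in sight:

```python
def simulate(steps=50000, dt=0.01):
    """Toy population dynamics for three molecular 'varieties':
      A: cheap to form, decays quickly; B: costly to form, decays slowly;
      C: autocatalytic -- its formation rate also scales with its own count.
    Each step applies dN/dt = formation*room - decay*N, where 'room' is a
    shared resource limit that keeps the total population bounded."""
    pops = {"A": 1.0, "B": 1.0, "C": 1.0}
    form = {"A": 5.0, "B": 1.0, "C": 0.5}   # spontaneous formation rates
    decay = {"A": 1.0, "B": 0.2, "C": 0.5}  # breakdown rates
    k_auto = 0.6                            # autocatalytic rate constant for C
    cap = 100.0                             # finite shared resources
    for _ in range(steps):
        room = max(0.0, 1.0 - sum(pops.values()) / cap)
        for v in pops:
            rate = form[v] * room - decay[v] * pops[v]
            if v == "C":
                rate += k_auto * pops[v] * room  # self-amplifying creation
            pops[v] += dt * rate
    return pops

final = simulate()
# C, the autocatalyst, ends with the largest population despite having
# the lowest spontaneous formation rate.
print({v: round(p, 1) for v, p in final.items()})
```

No goals, no replicators, no "function" -- just rate laws -- yet one variety ends up preferentially populated, which is exactly the kind of selection-without-biology the papers need to distinguish from the real thing.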
That paper tries to focus on "function", but the focus doesn't really achieve the needed sharpness, because the word is ambiguous. Human brains are hardwired to see goal-oriented phenomena in as many places as possible. But for most of the history of life, goals didn't exist, and things happened because they followed inevitably from the way they were in the past, rather than happening in order to change the future by approaching an internally represented target state. In attempting to create an objective definition for function, the paper almost escapes this teleological trap, but you can tell from the uses of the term in other places that the authors' hearts haven't really accepted the concept. Many of the comments to the Ars Technica story attribute this to a conflict of interest with quasi-religious goals of the foundation that funded much of the authors' work.
That's too bad. The slogan "It goes by itself" needs to become as much of everyone's way of thinking as Galileo's "Nevertheless, it moves" did 400 years ago.
Saturday, August 19, 2023
Improving assessment of authentication via some formalization: Preliminary considerations
Authentication used to be easy: collect a username and password, and check the password. Now it's so complicated that it takes hundreds of pages to specify how it works, and you have to be a talented professional to know if something built to the specification is trustworthy.
And the requirements for authentication have grown equally large and complex -- a single identity spans multiple implementations, with delegated identities, so authentication is often performed by a different organization than initial registration of an identity, and probably with different policies that need to be coordinated.
It's no longer possible for a single person to have the privileges and resources to learn and comprehend all the implementations used by a single identity. This means that even if you're a specialist in authentication systems, you can't be sure that the authentication framework that's used by the people that you're responsible for actually fulfills its requirements. If you're an ordinary user, you can only trust that the social-economic effect of millions of other users like you has enough of a cumulative effect towards trustworthiness that the system is reliably usable.
Perfect trustworthiness is impossible. It's not even possible to clearly and consistently judge how close to perfection we actually come with real-life systems. But we can make it easier to understand how it all works and to analyze where the weak points are. Formal methods are the standard recommendation for assuring consistency in designs: they replace ambiguous verbal descriptions with strictly defined notations. But if the problem is complexity, the formal descriptions must be just as complex as the verbal ones, and they make even greater demands on the mental capabilities of the security specialists trying to use them, since they define yet another language that must be learned and understood, in addition to the natural English of the informal descriptions.
We need tools that will ease the burden of validation of authentication systems by automating the consistency checks themselves. And we need those tools to be usable without imposing their own intolerable complexity demands on their users.
We could start a search for such tools by looking at automated proof assistants, like Coq and Lean. These turn out to be written for mathematicians, not practicing developers who need to prove the correctness of real-world software, much less application specialists like security analysts. Maybe we could use languages based on principles learned from proof assistants, such as dependent types. But no, these are still mostly research projects: even the most promising of them, Agda and Idris, remain primarily research vehicles, and the dependent type language developed in The Little Typer is a toy language not intended to be used seriously.
Making a long story short, we could look at popular functional languages like Haskell and OCaml, and reject them as being contaminated by too much syntax to learn for the value they provide in utility as modeling tools. (Figuring out what functional languages are good for, if anything, is a continuing adventure.)
In the end, we want a small set of properties in our modeling language:
- Static typing, because we want to check the model, not execute it.
- Classic Euler function syntax, i.e. f(x), rather than some Polish notation with too many parentheses (Lisp, typed Racket) or with no parentheses at all (Haskell, OCaml).
- Functional capability, in order to capitalize on the amazing proof properties of the Curry-Howard-Lambek correspondence if we can, as well as all the other integrity-enhancing properties of the functional programming paradigm.
- Minimization of the amount of transformation needed to process JSON descriptions, since we want to describe the essential properties of authentication systems as a finite-state machine, in a simple, well-known data description language like JSON.
- Completeness: that the descriptions don't have undocumented gaps where loopholes and backdoors can lurk, and that the descriptions themselves aren't so complicated to understand that we inadvertently skip over key parts, and miss important errors that they might contain.
- Consistency: that all the components of the description fit together as claimed
- Clarity: that the descriptions don't rest on ambiguities inherent in natural languages in order to achieve a false sense of consistency
- Absence of hidden weakening: "A chain is only as strong as its weakest link." Complex systems contain many points where it is possible for weak cryptography to slip in without notice, often in the form of short or weak keys, or as obsolete, broken algorithms.
- Key traceability, in two forms:
  - Password identifiability: all users who can create, view, or change a password are known. All too frequently, there are privileged administrators who can compromise security without any evidence of their misbehavior being recorded. This is of course a key concern for maintaining the privacy of information other than security keys, as well.
  - Auto-generated randomness: many security algorithms depend on the system generating a random number that is often immediately used and discarded, but other times may be preserved for a long time, e.g. across system restarts. It's important to know where these numbers originate, and that they are cryptographically secure, i.e. unpredictable in both the short run and the long run.
- Secure events are securely logged: Logging of key events should be onto write-once media, or distributed onto a public blockchain that is immutably and irretrievably copied.
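As a sketch of what such a tool might look like, here's a hypothetical JSON description of a login flow as a finite-state machine, with mechanical checks for two of the properties above: consistency (every transition endpoint is declared) and a weak form of completeness (no unreachable, dead spec text). The schema, state names, and events are all invented for illustration:

```python
import json

# A hypothetical, minimal JSON description of a login flow as a
# finite-state machine: declared states plus event-labeled transitions.
SPEC = json.loads("""
{
  "initial": "anonymous",
  "states": ["anonymous", "password_checked", "mfa_checked", "authenticated", "locked"],
  "transitions": [
    {"from": "anonymous",        "event": "good_password", "to": "password_checked"},
    {"from": "anonymous",        "event": "bad_password",  "to": "locked"},
    {"from": "password_checked", "event": "good_otp",      "to": "authenticated"},
    {"from": "password_checked", "event": "bad_otp",       "to": "locked"}
  ]
}
""")

def check(spec):
    """Mechanical consistency checks on the FSM description."""
    errors = []
    states = set(spec["states"])
    # Consistency: every transition endpoint is a declared state.
    for t in spec["transitions"]:
        for end in (t["from"], t["to"]):
            if end not in states:
                errors.append(f"undeclared state: {end}")
    # Completeness (weak form): every declared state is reachable.
    reached, frontier = {spec["initial"]}, [spec["initial"]]
    while frontier:
        s = frontier.pop()
        for t in spec["transitions"]:
            if t["from"] == s and t["to"] not in reached:
                reached.add(t["to"])
                frontier.append(t["to"])
    errors += [f"unreachable state: {s}" for s in states - reached]
    return errors

print(check(SPEC))  # → ['unreachable state: mfa_checked']
```

The checker catches that the spec declares an MFA state no transition ever reaches -- exactly the kind of gap where a loophole, or a silently skipped requirement, can lurk.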
Sunday, April 16, 2023
Foraging as a unifying strategy for neurobehavioral research programs
I follow a few computational and behavioral neuroscientists, and have noticed that they sometimes mention foraging in their research summaries. I've recently been realizing the brilliance of foraging as a coordinating framework for a research program in those areas. Foraging provides both evolutionary support and ecological validity to link lab studies with animals' situations in nature, over a vast range of capabilities. Here's an outline of how that works:
Consider the notation "-->" to mean "provides an evolutionary base for the emergence of". Then...
Note: the sequence below is not a strict hierarchy; related hierarchies evolve independently in parallel
- passive foraging (e.g. corals) --> gradient following foraging (jellyfish, mosquitos)
- gradient following foraging --> path creation foraging (ants, herbivores)
- path creation foraging --> goal-oriented foraging
- goal-oriented foraging --> route planning
- route planning --> global optimization of route traversal resources
- route planning with limited resources --> "mental" route planning
- mental route planning with limited cognitive resources --> cognitive load management
- cognitive load management --> mental introspection
- mental introspection --> consciousness
The perceptual aspects of foraging are multi-factorial, and their evolution is even less strictly hierarchical than foraging as a whole. Key perceptual transitions include:
- open field, and path network foraging --> localization of self in an "allocentric" environmental landscape
- allocentric maps --> distinction between self and other
- objects with complex properties --> indirect "signs" of foraging goals
- bounded perceptual processing abilities --> attention
- attention --> endogenous control of perceptual salience
- endogenous attentional control --> signification overshoot
- signification overshoot --> the "hard problem" of consciousness
Wednesday, March 29, 2023
Subsidizing green mines could reduce bitcoin's environmental damage
Bitcoin mining is one of the largest consumers of electricity in the world, so it is important to reduce its climate impact by incentivizing mine operators to reduce their use of fossil-fueled energy. While it is still not the best and highest use of that energy, a bitcoin mine powered by dedicated solar, wind, hydro, or natural hydrogen sources does minimal harm to the atmosphere. Because their only physical products are small amounts of internet traffic and waste heat, mining datacenters can even be located near their power sources, eliminating the need for expensive, hard-to-approve long distance transmission lines.
Bitcoin investors and users are likely to be willing to pay a small premium for bitcoins and bitcoin transactions that promote environmentally sound mining practices. Bitcoin exchanges can deliver this premium to green and gold mines via the mining pools that they use, despite the untraceability of failed hash computations. Exchanges can enhance their brands by working with mining pools to develop certification programs that validate the environmental impacts of the bitcoin mines that they work with. These incentives will provide bitcoin mine operators with additional motivation to use renewable energy beyond those sources' steadily increasing cost advantage.
Green, Blue, Gray, and Gold bitcoin mines
That doesn't leave the carbon footprint of bitcoin mining totally unmanageable, though. A way to begin is to start tracking the environmental impact of each bitcoin mine. Identifying and documenting the details of each mine is not really feasible, but we can take a lead from the hydrogen production industry and identify three major categories of impact. "Green mines" are powered exclusively by renewable energy. "Blue mines" may be powered by non-renewable sources, but take the output of those sources and effectively mitigate their impact, most likely by sequestering the carbon dioxide those generators produce. Those datacenters in the Permian Basin that consume excess natural gas to mine bitcoin are halfway to blue bitcoin, but they need to take their CO2 exhaust and pump it back into the ground. They could even use that CO2 in its supercritical form for enhanced recovery of oil, but it would be an accounting nightmare to try to track the secondary CO2 produced by burning that oil. Failing to disqualify enhanced recovery uses for a "blue mining" label would create a serious loophole in a labeling program. Mining operations that use fossil electricity without carbon capture would be called "gray mines". Powering a bitcoin mine with electricity from geothermal sources or with geological natural hydrogen could even be called "gold mining".
Projects like the Cambridge Bitcoin Electricity Consumption Index that report the global energy consumption of bitcoin mining don't link individual mines to their energy consumption, but instead estimate the type of equipment that mines are likely to be using, with mine locations based on anonymized, voluntary reports from mining pools. They could enhance their environmental impact assessments by incorporating government data on regional power production mixes, and such estimates can affect individual mine managers' power sourcing decisions indirectly via their mining pools.
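The kind of estimate such an assessment could produce is simple to sketch. The grid-mix fractions and the mine's consumption below are invented; the per-source carbon intensities are typical published lifecycle figures, used here only for illustration:

```python
def mine_carbon_tonnes(annual_mwh, grid_mix, intensity_g_per_kwh):
    """Estimate a mine's annual CO2 (tonnes) from its electricity use
    and the generation mix of its regional grid.
    grid_mix: fraction of supply from each source;
    intensity_g_per_kwh: lifecycle gCO2 per kWh for each source."""
    g_per_kwh = sum(grid_mix[src] * intensity_g_per_kwh[src] for src in grid_mix)
    return annual_mwh * 1000 * g_per_kwh / 1e6  # kWh * g/kWh -> grams -> tonnes

# Illustrative numbers: a 50 GWh/yr mine on a half-gas,
# quarter-coal, quarter-wind regional grid.
intensity = {"coal": 820, "gas": 490, "wind": 11, "hydro": 24}  # typical lifecycle gCO2/kWh
mix = {"gas": 0.5, "coal": 0.25, "wind": 0.25}
print(round(mine_carbon_tonnes(50_000, mix, intensity)))  # → 22638 tonnes CO2/yr
```

A mine operator who can document a greener mix, or a dedicated renewable source, would show a dramatically lower number -- the quantitative basis a certification program needs.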
There's another version of this note on Medium with more context.
Saturday, January 14, 2023
Pricing monoclonal antibody treatments
The FDA has approved another treatment for early-stage Alzheimer's Disease that targets the amyloid plaques that are a hallmark of its effects on the brain. The treatment—lecanemab, brand-name Leqembi, made by pharmaceutical companies Eisai and Biogen—is an intravenous monoclonal antibody that targets amyloid-beta proteins, which accumulate in plaques in the brains of people with Alzheimer's. Researchers have not yet conclusively determined if amyloid plaques are a root cause of the disease. There are many things that go wrong in the brains of Alzheimer's patients, and it may be wishful thinking to look for a "silver bullet" single treatment.
Like aducanumab, lecanemab can have severe, even life-threatening side effects, and like aducanumab, Medicare will pay for the treatment only in the context of an ongoing clinical study. As of this writing, no studies are planned, which means that only wealthy people will actually get the treatment.
Eisai and Biogen have priced lecanemab at $26,500 for a year's supply. To many people who only see drug prices for over-the-counter products, that seems like a lot of money. But is it really?
It's a basic principle of price discovery in capitalism for sellers to charge "all that the market will bear", and to let competition between sellers generate a functioning market. Patents for things like drugs prohibit the market-based pricing mechanism from existing, and all we're left with is monopolistic pricing, where the "competition" is the willingness of the buyer to do without the product. When the buyer's alternative is death, this mechanism doesn't work well.
Putting a price on a long, slow decline to death isn't easy, but that doesn't stop people. According to a press release from the company, Eisai has devised a mathematical formula to compute the quality-of-life improvement delivered by their product given its measured effectiveness, and priced it substantially below that. If you believe their formula and the results of their clinical trials, it's a good deal.
Like aducanumab/Aduhelm, lecanemab is a monoclonal antibody, so its pricing can be compared to monoclonal antibodies used as treatments for other diseases. The 42 monoclonal antibody treatments listed by pharmacy discounter GoodRx have a median price per dose of $5275, ranging from the government-set $3/dose for some Covid-19 treatments to $239,020/dose for a chemotherapy treatment.
The Alzheimer's treatment from Eisai is given 24 times a year, making its price $1104/dose. Compared to other monoclonal antibody treatments for other diseases, about 80% below the median could be considered to be pretty low priced.
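The per-dose arithmetic, using the figures above, is easy to check:

```python
annual_price = 26_500      # Eisai/Biogen list price for a year of lecanemab
doses_per_year = 24        # biweekly infusions
median_mab_dose = 5_275    # GoodRx median price per dose across 42 mAbs

per_dose = annual_price / doses_per_year
discount = 1 - per_dose / median_mab_dose
print(f"${per_dose:.0f}/dose, {discount:.0%} below the median mAb")
# → $1104/dose, 79% below the median mAb
```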
Monday, December 19, 2022
Who are official directives to wear face masks protecting? Not you.
Two years into the Covid-19 pandemic, the landscape of the risk and what to do about it has changed. But the response of public health officials and experts and random people with opinions has not. The data on what's happening is still bad, so even the most thoughtful of expert assessment isn't as good as it really needs to be. And the politicization of the response has created a situation where public statements have to be phrased in a way that impels people to do the right thing even if it's for the wrong reasons and supported by inappropriate facts.
The general population can't distinguish between the virus (SARS-Cov-2) and the disease (Covid-19). The public health surveillance data for the virus is much better than the data for the disease. This means that even experts who should know better talk about them as if they were the same. They end up fighting the virus, not the disease. Journalists who need a hot, grabby story are motivated to find the most severe way to write or talk about the pandemic.
Here's how to think about the situation if you want to react in a more sophisticated way than by following guidance that is oversimplified so that it can motivate the entire population, even those parts of the population that need super-simple instructions or are skeptical of or even opposed to official directives.
Basic principles
- Reduce exposure to the virus
- Stay away from confined areas with poor ventilation
- If you have to go into risky areas, with lots of people who may be infectious, protect yourself - wear a good mask.
- Reduce your susceptibility to the disease by getting vaccinated and boosted
Rules to protect yourself
- Vaccination is better than masking
- Boosters make vaccination even more effective
- Any mask is better than no mask
- Wearing a mask while vaccinated is better than either one alone
- Cloth or surgical masks protect the people around you more than they protect you
- To protect yourself, wear a standard rated mask
- There are lots of standards: N95, KN95, FFP2, KF94, and more. It took a bit of searching to find a thorough survey of most of the important ones.
- A rated mask with a valve protects you, but it doesn't protect people around you, so use a mask without a valve. If you are infectious but still feel OK (asymptomatic), don't infect others inadvertently. Even that article misses this point.
Saturday, December 03, 2022
Mini-review of Alastair Reynolds' "Eversion"
Alastair Reynolds is in the top tier of my favorite authors. Eversion is his latest novel. It's probably best read as a mystery story, with the mystery being which novelistic form it is following.
Is it sci-fi horror, like Reynolds' own Diamond Dogs? Is it a series of parallel, interlinked stories set in different time frames, like David Mitchell's Cloud Atlas, or Simon Ings' Dead Water? Or is the linkage between the stories a psychic one, like Philip K. Dick's Ubik? It contains a scary, mysterious object, like H.P. Lovecraft's At the Mountains of Madness, or Iain M. Banks' Excession. And the object turns out to have a mathematical character, like many of the objects in the stories in Clifton Fadiman's collections Fantasia Mathematica and The Mathematical Magpie. The problem with the object might be related to the problem of sphere eversion.
Without giving away a significant spoiler, it turns out to be all of these, and more. Fitting all these modes together is a tall order, and Reynolds almost succeeds. But Reynolds is not really a stylist, and this kind of story needs absolute mastery of style to make its shifts of context enjoyable. The stylists in the list above, Mitchell and Ings, don't have the command of technology to meet Eversion's other demands, though. If I were more of a fan of horror, I might have found it totally satisfying. I'm hard to please.
Bookseller note: Goodreads is owned by Amazon, so take its bookseller recommendations with a grain of salt. I try not to buy from Amazon these days, because of its anti-competitive practices. But it's hard to find any large company these days that doesn't have an anti-competitive streak.
Wednesday, November 30, 2022
Too much planning to survive can reduce survival
It's a long, strange path that our ancestors have taken to get to our level of cognitive processing. It's understandable, but not forgivable, that many philosophers don't bother tracking it all the way through from microbial beginning to the latest cultural edifices.
Everyone knows that evolution works by "survival of the fittest". This is almost a tautology, since "fitness" is defined in terms of the number of descendants who survive to reproduce themselves.
Less well understood is how species with complex individual members come about. It occurs because there is always variation in complexity, and some variants acquire increased fitness by virtue of some aspects of their complexity. Because "there is always room at the top", there's a general trend towards ecosystems hosting populations with greater complexity.
Then at some point in the evolution of greater and greater complexity, the ability of individuals to make plans can appear. This can take a long time, but nature can take as much time as it needs; it's not on any particular schedule.
Among the things that planning can do, is make plans to survive. An organism that can make plans intended to enhance its survival in certain situations (and then execute those plans) will have a greater likelihood of surviving those situations than an organism that just reacts to the immediate aspects of them.
However, the process of planning consumes cognitive resources and attention. Computational and game-theoretic analyses of the planning process have shown that comprehensive planning involves a search through a space of all possible sequences of actions that grows exponentially in the size of the problem space, or equivalently in the depth of search traversed before a particular plan of action is abandoned in favor of an alternative. The game of chess is the classic example of planning in a situation whose solution is beyond the capability of any human or computer yet built.
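To make the exponential growth concrete, here is a toy calculation in Python. The branching factor of roughly 35 is the usual rough figure quoted for chess; everything else is just arithmetic.

```python
# Toy illustration: an exhaustive game-tree search to depth d with a constant
# branching factor b must consider 1 + b + b^2 + ... + b^d positions.
def tree_size(branching, depth):
    return sum(branching ** d for d in range(depth + 1))

# With chess's rough branching factor of 35, looking ahead only four plies
# already means over 1.5 million positions.
print(tree_size(35, 4))  # 1544761
```

Each additional ply multiplies the work by another factor of 35, which is why unbounded planning is hopeless and heuristic cutoffs are mandatory.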
Cognitive resources used by planning might be more effective in enhancing survival if they were applied to reacting quickly and precisely to situations, rather than focusing on planning. The stereotype of the "absent-minded professor" is an example of this kind of misallocation of resources.
Thus, the most effective kind of planning is resource-bounded, making heuristic estimates rather than carrying the planning through to a conclusion. This creates an inverted-U shaped function of the effectiveness of planning in enhancing survival vs the amount of resources applied to planning.
Maximizing survival involves finding the sweet spot between planning and action. Finding and maintaining planning activities at this sweet spot requires identifying and controlling the depth and comprehensiveness of planning. Ability to exercise this kind of control provides its own survival advantages, with its own inverted-U properties.
Higher order control of planning is one of the cognitive processes involved in consciousness. Recognizing this provides part of the answer to the questions of the usefulness of consciousness, and to the evolutionary origin of consciousness.
Wednesday, October 26, 2022
Passkeys - a password killer at last?
Betteridge's Law of Headlines is right again. Nope.
Ars Technica has an enthusiastic article triggered by an announcement that a few more companies have jumped on the FIDO Alliance Passkey bandwagon. The Ars commentariat remains skeptical.
I've not yet encountered a passkey authentication prompt in the wild, so maybe that bandwagon isn't rolling as fast as its sponsors would like you to believe. The key thing to try to understand is who the audience for passkeys is: It's the connected person in a connected world. If you're a person who has a phone, a smart watch, a notebook, and a desktop PC, and your house has an Amazon Echo or three, and you despair of keeping your accounts synchronized and secure, passkeys might help.
If you're a low-tech person, or a security-conscious person who doesn't trust the ability of tech giants to create and manage securely interoperable infrastructure, this is just more unwanted complexity.
For example, my mother lives in a small town, and her bank's website doesn't support even basic SMS or voice-callback two-factor authentication, because their customer base is so unsophisticated that they wouldn't tolerate the hassle.
Enormous companies like AT&T are so disorganized that they can't manage a two-factor system that supports more than one phone at a time. To think that they'll be able to do a clean, secure job of deploying passkey technology is laughable.
Yubico, who makes security tokens, has a nice chart showing how deeply dependent passkeys are on having smart devices fully connected to the cloud. If you ever travel out of range of cell service, you're out of luck.
Keep buying the latest and greatest model of all your devices, and you'll be OK most of the time. Stay in your box, and you'll be fine.
Thursday, June 23, 2022
What is it like to be yourself?
Thomas Nagel's famous essay "What is it like to be a bat?" has been impairing people's ability to think about consciousness for some 48 years. That's a remarkable accomplishment. It would take a far longer article than we have space for here to survey all the writing it has stimulated, but there is a bit of news to remark about.
Nagel's philosophical goal is to convince you that there are important aspects of consciousness that are beyond the reach of science. His major method is first to convince you that he understands consciousness better than you do. This is hard, because as a conscious person, you have incontrovertible knowledge of your own consciousness. But for many readers, and especially trained thinkers like philosophers, he amazingly succeeds. There are two primary ways that he makes this happen.
First, he commits a basic rhetorical fallacy, closely related to the "appeal to authority" that is in every list of rhetorical blunders. I've come to call his error "argument by failure of imagination" and it goes like this:
- I'm a smart person. (This is the appeal to authority. Nagel is a well regarded professional philosopher, and part of the job of a philosopher is to perform smartness.)
- I've studied this topic thoroughly. The topic contains a problem X.
- In my studies, I've covered every imaginable solution to X.
- I've failed to find a solution. I can't imagine how X might be true.
- Therefore X is false.
Nagel applies this pattern up and down a ladder of experiences:
- experiencing what it is like to be a bat (impossible)
- experiencing what it is like to be a cat (impossible)
- experiencing what it is like to be an ape (impossible)
- experiencing what it is like to be someone of the opposite sex (men and women are inescapably, mysteriously different)
- experiencing what it is like to be someone of the same sex (impossible. Sorry, bro.)
- experiencing what it is like to be your twin sibling (impossible)
- experiencing what it is like to be yourself (impossible)
Thursday, April 21, 2022
Where to spend a billion dollars improving the world? Desalination tech R&D.
Updated at the end...
Conor Friedersdorf has a column in The Atlantic. This week, he asked "Say you received $1 billion to spend on improving the world. How would you spend it? Why?"
Sunday, March 06, 2022
The long game in Ukraine
In the excitement of the start of an invasion, there's not much discussion of how this situation might play out in the long term. Here are some thoughts.
In a modern war, the losses for every side exceed their gains - there are no longer true winners. “Winning” means losing less than the other guy.
Economic losses for Russia are larger than economic losses for Ukraine, simply because Russia is larger. Russia has lost already, even if they don’t know it.
But Putin’s eyes are on empire, not on his people’s welfare. An empire of destitute serfs is still an empire.
Militarily, the fall of Kyiv and replacement of its government by a puppet “Belarus South” would be satisfactory to Putin. Nothing less is likely to suffice.
Even if he has to back out this time, he's going to keep trying one way or another.
It looks to me like there are three strategies that lead to continued Ukrainian independence.
1. Regime change in the Kremlin. This is unlikely. Even without Putin as leader, a new Putin-wannabe is likely to take his place. Russia’s repressive kleptocratic bureaucracy will take generations to replace, regardless of who leads it.
2. Protracted insurgency: Ukraine becomes a European Afghanistan. The Afghans beat back the Russians in the 1980s and the Americans in the 2000s. The Ukrainians appear to have the spirit to do the same.
History blog A Collection of Unmitigated Pedantry has an extensive review.
3. Logistical interdiction. Napoleon famously observed that “an army travels on its stomach.” Houstonians have experience with being stuck in a 40-mile long traffic jam. Now imagine one in wintertime!
As long as the west can keep Ukrainian troops supplied with ammo and anti-tank and anti-aircraft missiles, and keep Ukrainian planes in the air, targeting fuel trucks can keep Russian troops trapped in their vehicles, waiting to be taken prisoner when they run out of supplies.
Experts remain baffled about why Ukraine is not attacking that famous 40-mile long convoy on its way to Kyiv. Maybe they know that the convoy has stalled out on its own, and they're deploying their scarce resources elsewhere.
Alex Vershinin’s analysis at War on the Rocks last November has details on Russia's logistical doctrine.
If we see Russian forward bases being built and roads being kept open for supplies, this strategy will have failed. But so far, the kind of construction that the US performed at Bagram Air Base in Afghanistan is not being reported in Ukraine.
Saturday, February 05, 2022
Money and Payments: The U.S. Dollar in the Age of Digital Transformation
The US Federal Reserve has released its long-awaited study of a digital dollar, exploring the pros and cons of the much-debated issue and soliciting public comment.
Here's what I think, in the form of my comments on the 22 questions that paper requests feedback on.
CBDC Benefits, Risks, and Policy Considerations
It could make leveraged and weakly or non-collateralized stablecoins even more obviously untethered to their supposed underlying currency than they already are.
Reduced risk is intrinsic to a CBDC, and commercial alternatives cannot fully compensate for it; but the development of insurance programs for loss and transaction failure beyond existing programs such as FDIC could make progress toward equalizing this advantage.
Transaction fees on CBDC activity would mitigate the threat to existing financial institutions due to its reduced cost. These fees would act as a source of income for the Federal Reserve, and help make CBDC operations self-funding.
Cross-border payments are being disrupted by stablecoins and other internet-based transfers; their future success remains unclear due to varying regulatory regimes in different nations.
CBDC Design
To provide support for advanced uses, every CBDC operation should have a counterpart API, adhering to well-documented standards.
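As a purely hypothetical illustration of what such a counterpart API might look like, here is a minimal Python sketch. The operation name, fields, and validation rules below are invented for this example; they are not drawn from the Fed paper or any real CBDC design.

```python
from dataclasses import dataclass
from decimal import Decimal

# Hypothetical request shape for one CBDC operation (a transfer).
@dataclass
class TransferRequest:
    sender_wallet: str
    receiver_wallet: str
    amount: Decimal  # exact decimal arithmetic for money, never floating point

def validate(req):
    """Basic well-formedness checks that a documented API standard might mandate."""
    return req.amount > 0 and req.sender_wallet != req.receiver_wallet

print(validate(TransferRequest("wallet-a", "wallet-b", Decimal("25.00"))))  # True
```

The point of a documented, standardized API is exactly that checks like these are specified once, centrally, rather than re-invented by each intermediary.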
Monday, November 08, 2021
Automated social engineering of SMS enhanced authentication
There’s a journalist’s slogan about the threshold for a reportable story that goes something like “Once is an accident, twice is a coincidence, three times is a trend.” Looks like bots that capture one-time-passcodes sent via SMS without human interaction have passed that level, as a report from Motherboard details.
NIST has deprecated SMS as a factor in multi-factor authentication since 2017, because SMS-based OTPs are vulnerable to man-in-the-middle attacks. These attacks are now automated. Once the attacker has gotten your username, password, and phone number as part of a cache of breached millions that they’ve acquired, they do a partial login to your account on a target system; the authentic system sends you your SMS code; and the attacker’s bot phones you, pretends to be the target’s security organization, and asks you to confirm the SMS code via your phone keypad.
I found out about this via a finance blog that I follow. When a security vulnerability has reached a level of visibility where financial pundits are writing about it, it’s time for financial systems to convert to a more secure method. The report from Intel471 offers some suggestions:
More robust forms of 2FA — including Time-Based One Time Password (TOTP) codes from authentication apps, push-notification-based codes, or a FIDO security key — provide a greater degree of security than SMS or phone-call-based options.
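The TOTP option mentioned above is not magic; it's a small, open algorithm (RFC 6238, built on RFC 4226's HOTP) that any authenticator app implements. Here is a sketch using only the Python standard library, checked against the RFC's published SHA-1 test vectors:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, digits=6, period=30):
    """RFC 6238 TOTP using HMAC-SHA1, the algorithm behind most authenticator apps."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((for_time if for_time is not None else time.time()) // period)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # "dynamic truncation" from RFC 4226
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890", time 59 -> "94287082"
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, for_time=59, digits=8))  # 94287082
```

The crucial security property: the code is derived from a shared secret and the current time, so there is no SMS message in transit for a bot to intercept or socially engineer out of you in real time (though phishing the code itself remains possible).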
Unfortunately, sometimes even simple solutions like physical tokens are unfeasible. The small town bank that my mother uses has a customer population that is so unsophisticated that they are unable to handle any kind of online authentication more sophisticated than a password, and Mom has trouble remembering hers.
Saturday, September 18, 2021
What's wrong with Haskell?
Whenever discussions of programming turn to functional programming languages, Haskell is invariably brought up as the best example of its kind, or at least the most well-known. It's eclipsed Lisp, which is a good thing, I suppose, but Haskell has such severe problems in dealing with the real world that it would be funny if it weren't so serious.
Maybe that's its most fundamental problem, that functional programming is not for dealing with the real world, it's for dealing with mathematical things, like functions. When you try to make it deal with the real world, you have to add a bunch of ugly things to the language to get it to cope adequately.
Explaining why this is so could get really long, but you deserve to know what Haskell's top problems are without having to wait for me to write a long essay and then for you to have to read it. Here, then, are my top peeves with Haskell.
- Lazy evaluation of function arguments. I understand that strict evaluation causes problems when functions are given arguments that don't terminate or have other problems, and those arguments are required to be evaluated because they come first, even though they are thrown away later in the function they are passed to. This means that every language that tries to have strict evaluation really needs at least one quasi-function that disobeys this rule and does lazy evaluation. This is usually the "if" function, or a "case" or "cond" function that uses shortcut evaluation and returns as soon as one of its arguments succeeds, before evaluating all of them.
But lazy evaluation makes it difficult to create a mental model of what a complicated series of function applications ends up doing, because you have to understand the sequence of applications all the way to the end before you can be sure that you've accounted for the effects of all its arguments. This leads to excessive demands on mental working memory, which leads to programming errors. Functional programming advocates argue that this isn't a problem, because modeling execution is bad practice; you're supposed to prove properties of the program instead by reasoning about them.
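For contrast, even an eager language can opt in to laziness explicitly. This sketch uses Python generators (Python rather than Haskell, purely as a neutral illustration) to build a potentially infinite sequence that only computes values on demand:

```python
from itertools import count, islice

# A lazy, potentially infinite sequence: nothing is computed until demanded.
def squares():
    for n in count(1):  # count(1) yields 1, 2, 3, ... forever
        yield n * n

# Taking a finite prefix forces only the first five elements;
# the "infinite" tail is never touched.
first_five = list(islice(squares(), 5))
print(first_five)  # [1, 4, 9, 16, 25]
```

The difference is that in Python the laziness is opt-in and visible at the call site, so the mental model stays simple; in Haskell it's the pervasive default.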
Lazy evaluation also makes some cool things possible, like potentially infinite data structures that don't cause problems because functions on them always return before running out of data. In reality, I've only seen one use of infinite data, and that was in Conal Elliott and Paul Hudak's functional reactive programming idea. FRP used an infinite list of input events instead of an infinite series of "read" function calls inside an IO monad.
- Monadic IO. Using the category-theoretic concept of a monad to force sequential execution of functions into a lazy evaluation framework, where argument application doesn't need to occur in the textual order presented in the program code, is a brilliant solution to a hard problem, but it's the wrong answer. Monadic IO is presented as a way to textually encapsulate the side effects of interacting with a stateful world outside of a program, giving freedom from side effects to the rest of the program, but in far too many instances the entire program is wrapped in an IO monad, leaving the stateful side effects just as pervasive as they ever were.
An alternative design direction would be to think more deeply about the notion of "referential transparency" and build control over referentiality into the language. Instead of a simple concept like "functions with the same arguments always give the same result" and calling violations of this rule "side effects" to be avoided, go back to W.V.O. Quine's short discussion in the book Word and Object, and understand what's going on when the names "Cicero" and "Tully" are both different and the same, as they refer to the same man who lived in Rome some two thousand years ago.
Control of referential depth exists in languages like C that provide pointers, but as far as I know pointers have never been successfully implemented in a language with strong typing that can prevent "attempt to dereference null pointer" errors at compile time. Most languages that attempt to deal with this problem do it by eliminating pointers entirely, and create special cases for situations like strings where references are the natural way to deal with the type. In Lisp you have the 'quote and 'unquote forms that partially deal with managing referential depth in a general way, but those work because Lisp is dynamically typed.
- Strong typing that's not quite powerful enough. Haskell was one of the first functional languages to incorporate strong typing powerful enough to be useful, for certain senses of "useful", and that was a significant achievement. But Haskell's type system is first order, and that causes problems. It means that flat data structures like vectors and arrays either have to be faked as lists of lists, or restricted to a unique size. You can define something like an int_vector_len10, but there's no way to express a type int_vector(size) where the size is determined at compile time.
Functional languages with type systems that support such parametrized types exist, and their type objects are called "dependent types". Unfortunately, the people who design dependently typed languages are mostly interested in tools for doing mathematics, such as proof assistants, not in tools for doing more general things, so their support communities are small and specialized, and they don't evolve the libraries and tooling needed for broad use. We can only wait and hope that dependent typing technology becomes well known enough to migrate into more general-purpose functional languages. Maybe even a future generation of Haskell, who knows?
- Pervasive Currying. Currying converts a function of multiple arguments into a function of one argument that returns a function of the remaining arguments. It lets you get rid of all those parentheses that are so annoying to people who are just learning Lisp. This is cool, but bad. The absence of parentheses obscures program structure by requiring a programmer reading unfamiliar code to look up and keep in mind the type definition of every function in use, in order to determine the abstract structure of the code. This puts a memory burden on the reader that slows understanding and permits misunderstandings of what's going on. In a large program that uses library functions such as those in Haskell's standard prelude, it's a serious problem.
With parentheses or other delimiters, you can discover the code structure without having to understand the specifics of every element. Authors of Haskell programs are subconsciously aware of this problem, and use line breaks and indentation to suggest the structure. But these hints are not required by the compiler, and cannot be trusted.
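The mechanics of currying can be shown outside Haskell. This Python sketch uses functools.partial to do explicitly, one argument at a time, what Haskell does automatically for every function:

```python
from functools import partial

def volume(length, width, height):
    return length * width * height

# In Haskell, `volume 2` is already a function of the remaining two arguments.
# In Python we must spell out each partial application.
vol2 = partial(volume, 2)      # fixes length = 2
vol2x3 = partial(vol2, 3)      # fixes width = 3
print(vol2x3(4))  # 24
```

Notice that in the Python version each step is syntactically marked, which is exactly the explicitness the paragraph above argues is lost when currying is pervasive and invisible.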
Maybe the right way to think about functional programming, especially with typed functional languages, is that it's a declarative framework for thinking about and expressing provable facts about complex functional expressions. This is an important goal for programming and creating tools that facilitate it is an admirable endeavor. The fact that those expressions can be executed and produce results is almost an irrelevant afterthought.
Declarative programming entails a radically different mindset from programming in a style that reflects the hardware structure of computers. Many smart people believe that it's better to think in this style than in a more conventional one. But maybe you thought that computers could be used to interact with the real world, and maybe even model its behavior? What kind of a weirdo are you?